Test Report: Docker_Linux_crio_arm64 22141

2191194101c4a9ddc7fa6949616ce2e0ec39dec5:2025-12-16:42801

Failed tests (43/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.31
44 TestAddons/parallel/Registry 15.97
45 TestAddons/parallel/RegistryCreds 0.49
46 TestAddons/parallel/Ingress 145.55
47 TestAddons/parallel/InspektorGadget 6.26
48 TestAddons/parallel/MetricsServer 5.39
50 TestAddons/parallel/CSI 44.25
51 TestAddons/parallel/Headlamp 3.23
52 TestAddons/parallel/CloudSpanner 5.32
53 TestAddons/parallel/LocalPath 9.5
54 TestAddons/parallel/NvidiaDevicePlugin 6.35
55 TestAddons/parallel/Yakd 6.26
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.88
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.09
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.39
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.45
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.52
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.41
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.2
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.07
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.7
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.18
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.33
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.66
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.4
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.06
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 129.74
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.28
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.25
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.27
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.24
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.5
279 TestMultiControlPlane/serial/RestartCluster 391.41
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.75
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.69
293 TestJSONOutput/pause/Command 2.51
299 TestJSONOutput/unpause/Command 1.85
358 TestKubernetesUpgrade 784.45
395 TestPause/serial/Pause 7.12
483 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7200.084
TestAddons/serial/Volcano (0.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable volcano --alsologtostderr -v=1: exit status 11 (303.205906ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 06:16:06.027815 1606215 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:06.028752 1606215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:06.028769 1606215 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:06.028776 1606215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:06.029045 1606215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:06.029363 1606215 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:06.029753 1606215 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:06.029772 1606215 addons.go:622] checking whether the cluster is paused
	I1216 06:16:06.029883 1606215 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:06.029899 1606215 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:06.031138 1606215 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:06.048749 1606215 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:06.048808 1606215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:06.066651 1606215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:06.167630 1606215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:06.167731 1606215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:06.202601 1606215 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:06.202626 1606215 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:06.202632 1606215 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:06.202636 1606215 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:06.202639 1606215 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:06.202643 1606215 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:06.202647 1606215 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:06.202649 1606215 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:06.202653 1606215 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:06.202664 1606215 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:06.202668 1606215 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:06.202672 1606215 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:06.202676 1606215 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:06.202679 1606215 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:06.202682 1606215 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:06.202687 1606215 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:06.202694 1606215 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:06.202704 1606215 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:06.202708 1606215 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:06.202711 1606215 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:06.202716 1606215 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:06.202719 1606215 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:06.202722 1606215 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:06.202726 1606215 cri.go:89] found id: ""
	I1216 06:16:06.202778 1606215 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:06.223869 1606215 out.go:203] 
	W1216 06:16:06.227067 1606215 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:06.227097 1606215 out.go:285] * 
	* 
	W1216 06:16:06.234654 1606215 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:06.238762 1606215 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.31s)
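
Note on the addons-disable failures: the Volcano block above and the Registry and RegistryCreds blocks below all fail at the same cleanup step. Before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node; that command exits 1 with "open /run/runc: no such file or directory", so the disable is aborted with MK_ADDON_DISABLE_PAUSED before the addon itself is touched. A minimal manual re-run of the same check, using only commands that already appear in the log (the profile name addons-142606 is taken from this run; whether /run/runc should exist under this crio configuration is left open here, not asserted), would be:

	# Re-run the pause check exactly as minikube did, via the profile's SSH access:
	out/minikube-linux-arm64 -p addons-142606 ssh -- "sudo runc list -f json"
	# See whether the runc state directory exists on the node at all:
	out/minikube-linux-arm64 -p addons-142606 ssh -- "ls -ld /run/runc"
	# List kube-system containers the same way minikube did just before the failure:
	out/minikube-linux-arm64 -p addons-142606 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
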
TestAddons/parallel/Registry (15.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 15.063075ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002834324s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003380991s
addons_test.go:394: (dbg) Run:  kubectl --context addons-142606 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-142606 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-142606 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.334095845s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable registry --alsologtostderr -v=1: exit status 11 (322.672853ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 06:16:33.321239 1607163 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:33.321945 1607163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:33.321962 1607163 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:33.321969 1607163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:33.322266 1607163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:33.323417 1607163 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:33.323836 1607163 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:33.323857 1607163 addons.go:622] checking whether the cluster is paused
	I1216 06:16:33.323975 1607163 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:33.323992 1607163 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:33.324580 1607163 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:33.343809 1607163 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:33.343870 1607163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:33.365376 1607163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:33.483188 1607163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:33.483283 1607163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:33.534133 1607163 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:33.534158 1607163 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:33.534164 1607163 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:33.534168 1607163 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:33.534171 1607163 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:33.534181 1607163 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:33.534185 1607163 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:33.534188 1607163 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:33.534191 1607163 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:33.534198 1607163 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:33.534201 1607163 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:33.534205 1607163 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:33.534208 1607163 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:33.534212 1607163 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:33.534215 1607163 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:33.534235 1607163 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:33.534239 1607163 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:33.534244 1607163 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:33.534247 1607163 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:33.534250 1607163 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:33.534255 1607163 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:33.534262 1607163 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:33.534265 1607163 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:33.534268 1607163 cri.go:89] found id: ""
	I1216 06:16:33.534321 1607163 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:33.553142 1607163 out.go:203] 
	W1216 06:16:33.556232 1607163 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:33.556259 1607163 out.go:285] * 
	* 
	W1216 06:16:33.563906 1607163 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:33.567592 1607163 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.97s)

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.945578ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-142606
addons_test.go:334: (dbg) Run:  kubectl --context addons-142606 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (258.579409ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 06:17:33.731759 1608771 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:17:33.733358 1608771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:33.733413 1608771 out.go:374] Setting ErrFile to fd 2...
	I1216 06:17:33.733434 1608771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:33.733763 1608771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:17:33.734108 1608771 mustload.go:66] Loading cluster: addons-142606
	I1216 06:17:33.734615 1608771 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:33.734664 1608771 addons.go:622] checking whether the cluster is paused
	I1216 06:17:33.734825 1608771 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:33.734864 1608771 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:17:33.735435 1608771 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:17:33.753243 1608771 ssh_runner.go:195] Run: systemctl --version
	I1216 06:17:33.753299 1608771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:17:33.781162 1608771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:17:33.875076 1608771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:17:33.875151 1608771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:17:33.909174 1608771 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:17:33.909196 1608771 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:17:33.909202 1608771 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:17:33.909206 1608771 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:17:33.909209 1608771 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:17:33.909213 1608771 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:17:33.909217 1608771 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:17:33.909219 1608771 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:17:33.909222 1608771 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:17:33.909230 1608771 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:17:33.909237 1608771 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:17:33.909240 1608771 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:17:33.909244 1608771 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:17:33.909247 1608771 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:17:33.909251 1608771 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:17:33.909262 1608771 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:17:33.909265 1608771 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:17:33.909275 1608771 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:17:33.909279 1608771 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:17:33.909283 1608771 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:17:33.909287 1608771 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:17:33.909294 1608771 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:17:33.909297 1608771 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:17:33.909300 1608771 cri.go:89] found id: ""
	I1216 06:17:33.909350 1608771 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:17:33.924355 1608771 out.go:203] 
	W1216 06:17:33.927286 1608771 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:17:33.927314 1608771 out.go:285] * 
	* 
	W1216 06:17:33.935098 1608771 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:17:33.938080 1608771 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (145.55s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-142606 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-142606 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-142606 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [0688d046-42d8-4258-af69-b8c641626fd7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [0688d046-42d8-4258-af69-b8c641626fd7] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003775481s
I1216 06:16:54.921980 1599255 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.812819143s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-142606 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
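
The step that actually failed above is the curl probe run through "minikube ssh": the reported exit status 28, propagated through ssh, matches curl's operation-timeout code, i.e. no HTTP response came back from the ingress on 127.0.0.1:80 inside the node. A hand-run equivalent of that probe, with the profile name and Host header taken from the log (the -m timeout and -w status-code flags are illustrative additions, not part of the test), might look like:

	out/minikube-linux-arm64 -p addons-142606 ssh -- "curl -s -m 10 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"
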
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-142606
helpers_test.go:244: (dbg) docker inspect addons-142606:

-- stdout --
	[
	    {
	        "Id": "bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726",
	        "Created": "2025-12-16T06:13:40.999815489Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1600649,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:13:41.068786507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/hosts",
	        "LogPath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726-json.log",
	        "Name": "/addons-142606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-142606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-142606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726",
	                "LowerDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-142606",
	                "Source": "/var/lib/docker/volumes/addons-142606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-142606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-142606",
	                "name.minikube.sigs.k8s.io": "addons-142606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a54c8801a155c7108fb424e85c2dd89dbbbe83437dfab238ac7e6a5ec1147ca",
	            "SandboxKey": "/var/run/docker/netns/7a54c8801a15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34245"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34246"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34249"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34247"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34248"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-142606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:e9:58:62:05:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "234aad51cbf5e49e54b2e21134f415ba87de220494e4f6151e070cebaa7dbe13",
	                    "EndpointID": "e6041f18e5257b5e5386a97f0516adb8883cac32326cd3c81566d2a8f70b1315",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-142606",
	                        "bf001fc7b739"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
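
For orientation: the NetworkSettings.Ports block in the inspect output above is the same data minikube queried earlier in this run to locate the SSH endpoint (22/tcp published on 127.0.0.1:34245). That lookup can be repeated directly with the Go template minikube logged, rewritten here with plain shell quoting (a re-quoted copy of the logged command, not a new one):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-142606
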
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-142606 -n addons-142606
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-142606 logs -n 25: (1.452028664s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-840918                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-840918 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ --download-only -p binary-mirror-707915 --alsologtostderr --binary-mirror http://127.0.0.1:38553 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-707915   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ delete  │ -p binary-mirror-707915                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-707915   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ addons  │ enable dashboard -p addons-142606                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-142606                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ start   │ -p addons-142606 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:16 UTC │
	│ addons  │ addons-142606 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-142606 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ ip      │ addons-142606 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ addons  │ addons-142606 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ ssh     │ addons-142606 ssh cat /opt/local-path-provisioner/pvc-a3d2f4c9-1b88-4e22-b605-5f5f6ef7354e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ addons  │ addons-142606 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ ssh     │ addons-142606 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:17 UTC │                     │
	│ addons  │ addons-142606 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:17 UTC │                     │
	│ addons  │ addons-142606 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-142606                                                                                                                                                                                                                                                                                                                                                                                           │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:17 UTC │ 16 Dec 25 06:17 UTC │
	│ addons  │ addons-142606 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:17 UTC │                     │
	│ ip      │ addons-142606 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
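For manual triage, each row in the audit table above corresponds to an invocation of the test-built binary against the same profile. A minimal sketch, assuming the addons-142606 profile still exists on the CI host and the binary is still at out/minikube-linux-arm64, of replaying the ingress probe that never recorded a completion time:

	out/minikube-linux-arm64 -p addons-142606 addons list
	out/minikube-linux-arm64 -p addons-142606 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

The first command confirms which addons are still enabled on the profile; the second reissues the in-VM curl exactly as the audit row records it. The remaining rows map to CLI invocations in the same way.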
	==> Last Start <==
	Log file created at: 2025/12/16 06:13:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:13:15.957789 1600247 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:13:15.957948 1600247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:15.958081 1600247 out.go:374] Setting ErrFile to fd 2...
	I1216 06:13:15.958092 1600247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:15.958409 1600247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:13:15.958928 1600247 out.go:368] Setting JSON to false
	I1216 06:13:15.959774 1600247 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32147,"bootTime":1765833449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:13:15.959850 1600247 start.go:143] virtualization:  
	I1216 06:13:15.963591 1600247 out.go:179] * [addons-142606] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:13:15.967676 1600247 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:13:15.967838 1600247 notify.go:221] Checking for updates...
	I1216 06:13:15.974456 1600247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:13:15.977588 1600247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:13:15.980782 1600247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:13:15.983903 1600247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:13:15.987010 1600247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:13:15.990282 1600247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:13:16.032354 1600247 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:13:16.032520 1600247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:16.088857 1600247 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 06:13:16.079088735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:16.088962 1600247 docker.go:319] overlay module found
	I1216 06:13:16.092105 1600247 out.go:179] * Using the docker driver based on user configuration
	I1216 06:13:16.095059 1600247 start.go:309] selected driver: docker
	I1216 06:13:16.095089 1600247 start.go:927] validating driver "docker" against <nil>
	I1216 06:13:16.095103 1600247 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:13:16.095864 1600247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:16.154674 1600247 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 06:13:16.145692237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:16.154829 1600247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:13:16.155047 1600247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:13:16.157997 1600247 out.go:179] * Using Docker driver with root privileges
	I1216 06:13:16.160801 1600247 cni.go:84] Creating CNI manager for ""
	I1216 06:13:16.160873 1600247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:13:16.160886 1600247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:13:16.160962 1600247 start.go:353] cluster config:
	{Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1216 06:13:16.164155 1600247 out.go:179] * Starting "addons-142606" primary control-plane node in "addons-142606" cluster
	I1216 06:13:16.166915 1600247 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:13:16.169822 1600247 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:13:16.172567 1600247 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:16.172620 1600247 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 06:13:16.172640 1600247 cache.go:65] Caching tarball of preloaded images
	I1216 06:13:16.172665 1600247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:13:16.172732 1600247 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:13:16.172743 1600247 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 06:13:16.173094 1600247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/config.json ...
	I1216 06:13:16.173125 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/config.json: {Name:mkdf2c59ee60ef020b4de8eb68942a1833c1c127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:16.189582 1600247 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 06:13:16.189712 1600247 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 06:13:16.189738 1600247 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 06:13:16.189744 1600247 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 06:13:16.189751 1600247 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 06:13:16.189760 1600247 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1216 06:13:34.624340 1600247 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1216 06:13:34.624401 1600247 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:13:34.624433 1600247 start.go:360] acquireMachinesLock for addons-142606: {Name:mk5d421a8bc03800bd0474a647fe31f4b3011418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:13:34.624603 1600247 start.go:364] duration metric: took 145.052µs to acquireMachinesLock for "addons-142606"
	I1216 06:13:34.624644 1600247 start.go:93] Provisioning new machine with config: &{Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:13:34.624720 1600247 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:13:34.628195 1600247 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 06:13:34.628455 1600247 start.go:159] libmachine.API.Create for "addons-142606" (driver="docker")
	I1216 06:13:34.628509 1600247 client.go:173] LocalClient.Create starting
	I1216 06:13:34.628631 1600247 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem
	I1216 06:13:35.028113 1600247 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem
	I1216 06:13:35.123794 1600247 cli_runner.go:164] Run: docker network inspect addons-142606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:13:35.140101 1600247 cli_runner.go:211] docker network inspect addons-142606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:13:35.140202 1600247 network_create.go:284] running [docker network inspect addons-142606] to gather additional debugging logs...
	I1216 06:13:35.140225 1600247 cli_runner.go:164] Run: docker network inspect addons-142606
	W1216 06:13:35.155094 1600247 cli_runner.go:211] docker network inspect addons-142606 returned with exit code 1
	I1216 06:13:35.155129 1600247 network_create.go:287] error running [docker network inspect addons-142606]: docker network inspect addons-142606: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-142606 not found
	I1216 06:13:35.155143 1600247 network_create.go:289] output of [docker network inspect addons-142606]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-142606 not found
	
	** /stderr **
	I1216 06:13:35.155251 1600247 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:13:35.172235 1600247 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b14410}
	I1216 06:13:35.172278 1600247 network_create.go:124] attempt to create docker network addons-142606 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 06:13:35.172343 1600247 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-142606 addons-142606
	I1216 06:13:35.237080 1600247 network_create.go:108] docker network addons-142606 192.168.49.0/24 created
	I1216 06:13:35.237114 1600247 kic.go:121] calculated static IP "192.168.49.2" for the "addons-142606" container
	I1216 06:13:35.237193 1600247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:13:35.256202 1600247 cli_runner.go:164] Run: docker volume create addons-142606 --label name.minikube.sigs.k8s.io=addons-142606 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:13:35.275952 1600247 oci.go:103] Successfully created a docker volume addons-142606
	I1216 06:13:35.276064 1600247 cli_runner.go:164] Run: docker run --rm --name addons-142606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142606 --entrypoint /usr/bin/test -v addons-142606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:13:36.942675 1600247 cli_runner.go:217] Completed: docker run --rm --name addons-142606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142606 --entrypoint /usr/bin/test -v addons-142606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.666570189s)
	I1216 06:13:36.942722 1600247 oci.go:107] Successfully prepared a docker volume addons-142606
	I1216 06:13:36.942764 1600247 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:36.942777 1600247 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:13:36.942840 1600247 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-142606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 06:13:40.927804 1600247 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-142606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.984922902s)
	I1216 06:13:40.927838 1600247 kic.go:203] duration metric: took 3.985057967s to extract preloaded images to volume ...
	W1216 06:13:40.927996 1600247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 06:13:40.928118 1600247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:13:40.985038 1600247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-142606 --name addons-142606 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142606 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-142606 --network addons-142606 --ip 192.168.49.2 --volume addons-142606:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:13:41.297519 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Running}}
	I1216 06:13:41.322866 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:13:41.352156 1600247 cli_runner.go:164] Run: docker exec addons-142606 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:13:41.410114 1600247 oci.go:144] the created container "addons-142606" has a running status.
	I1216 06:13:41.410140 1600247 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa...
	I1216 06:13:41.694055 1600247 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:13:41.715336 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:13:41.744187 1600247 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:13:41.744208 1600247 kic_runner.go:114] Args: [docker exec --privileged addons-142606 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:13:41.811615 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:13:41.837667 1600247 machine.go:94] provisionDockerMachine start ...
	I1216 06:13:41.837769 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:41.859406 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:41.859744 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:41.859753 1600247 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:13:41.860383 1600247 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 06:13:44.991981 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-142606
	
	I1216 06:13:44.992006 1600247 ubuntu.go:182] provisioning hostname "addons-142606"
	I1216 06:13:44.992092 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.038191 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:45.038533 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:45.038545 1600247 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-142606 && echo "addons-142606" | sudo tee /etc/hostname
	I1216 06:13:45.248125 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-142606
	
	I1216 06:13:45.248306 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.278587 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:45.278931 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:45.278957 1600247 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-142606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-142606/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-142606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:13:45.420777 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:13:45.420807 1600247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:13:45.420834 1600247 ubuntu.go:190] setting up certificates
	I1216 06:13:45.420851 1600247 provision.go:84] configureAuth start
	I1216 06:13:45.420924 1600247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142606
	I1216 06:13:45.437479 1600247 provision.go:143] copyHostCerts
	I1216 06:13:45.437564 1600247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:13:45.437704 1600247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:13:45.437780 1600247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:13:45.437848 1600247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.addons-142606 san=[127.0.0.1 192.168.49.2 addons-142606 localhost minikube]
	I1216 06:13:45.597072 1600247 provision.go:177] copyRemoteCerts
	I1216 06:13:45.597146 1600247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:13:45.597191 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.614392 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:45.708520 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:13:45.727452 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 06:13:45.744959 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:13:45.762424 1600247 provision.go:87] duration metric: took 341.544865ms to configureAuth
	I1216 06:13:45.762455 1600247 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:13:45.762648 1600247 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:13:45.762755 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.780261 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:45.780651 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:45.780676 1600247 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:13:46.055816 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:13:46.055837 1600247 machine.go:97] duration metric: took 4.218150695s to provisionDockerMachine
	I1216 06:13:46.055849 1600247 client.go:176] duration metric: took 11.427326544s to LocalClient.Create
	I1216 06:13:46.055863 1600247 start.go:167] duration metric: took 11.427410106s to libmachine.API.Create "addons-142606"
	I1216 06:13:46.055870 1600247 start.go:293] postStartSetup for "addons-142606" (driver="docker")
	I1216 06:13:46.055891 1600247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:13:46.056310 1600247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:13:46.056375 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.077412 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.176974 1600247 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:13:46.180563 1600247 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:13:46.180594 1600247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:13:46.180607 1600247 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:13:46.180679 1600247 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:13:46.180708 1600247 start.go:296] duration metric: took 124.832333ms for postStartSetup
	I1216 06:13:46.181033 1600247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142606
	I1216 06:13:46.198232 1600247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/config.json ...
	I1216 06:13:46.198525 1600247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:13:46.198580 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.215880 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.309542 1600247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:13:46.314242 1600247 start.go:128] duration metric: took 11.689505476s to createHost
	I1216 06:13:46.314267 1600247 start.go:83] releasing machines lock for "addons-142606", held for 11.689648559s
	I1216 06:13:46.314336 1600247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142606
	I1216 06:13:46.332111 1600247 ssh_runner.go:195] Run: cat /version.json
	I1216 06:13:46.332134 1600247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:13:46.332167 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.332201 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.357911 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.358066 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.453003 1600247 ssh_runner.go:195] Run: systemctl --version
	I1216 06:13:46.542827 1600247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:13:46.582376 1600247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:13:46.586750 1600247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:13:46.586827 1600247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:13:46.615874 1600247 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 06:13:46.615946 1600247 start.go:496] detecting cgroup driver to use...
	I1216 06:13:46.615993 1600247 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:13:46.616069 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:13:46.633768 1600247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:13:46.646297 1600247 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:13:46.646359 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:13:46.664082 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:13:46.684314 1600247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:13:46.809574 1600247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:13:46.935484 1600247 docker.go:234] disabling docker service ...
	I1216 06:13:46.935553 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:13:46.956253 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:13:46.969621 1600247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:13:47.096447 1600247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:13:47.216231 1600247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:13:47.229666 1600247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:13:47.243767 1600247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:13:47.243887 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.253289 1600247 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:13:47.253390 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.262288 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.270876 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.279512 1600247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:13:47.287824 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.297134 1600247 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.310460 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.319536 1600247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:13:47.327407 1600247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:13:47.334992 1600247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:13:47.454748 1600247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:13:47.639050 1600247 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:13:47.639153 1600247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:13:47.643005 1600247 start.go:564] Will wait 60s for crictl version
	I1216 06:13:47.643074 1600247 ssh_runner.go:195] Run: which crictl
	I1216 06:13:47.646818 1600247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:13:47.672433 1600247 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:13:47.672592 1600247 ssh_runner.go:195] Run: crio --version
	I1216 06:13:47.701586 1600247 ssh_runner.go:195] Run: crio --version
	I1216 06:13:47.733744 1600247 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 06:13:47.736558 1600247 cli_runner.go:164] Run: docker network inspect addons-142606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:13:47.753055 1600247 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:13:47.756926 1600247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:13:47.767108 1600247 kubeadm.go:884] updating cluster {Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:13:47.767234 1600247 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:47.767297 1600247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:13:47.804154 1600247 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:13:47.804180 1600247 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:13:47.804239 1600247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:13:47.829727 1600247 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:13:47.829750 1600247 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:13:47.829758 1600247 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 06:13:47.829847 1600247 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-142606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:13:47.829940 1600247 ssh_runner.go:195] Run: crio config
	I1216 06:13:47.891767 1600247 cni.go:84] Creating CNI manager for ""
	I1216 06:13:47.891842 1600247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:13:47.891881 1600247 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:13:47.891938 1600247 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-142606 NodeName:addons-142606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:13:47.892108 1600247 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-142606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:13:47.892231 1600247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:13:47.900130 1600247 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:13:47.900204 1600247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:13:47.907870 1600247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 06:13:47.923622 1600247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:13:47.937631 1600247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1216 06:13:47.950980 1600247 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:13:47.954774 1600247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:13:47.964685 1600247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:13:48.089451 1600247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:13:48.106378 1600247 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606 for IP: 192.168.49.2
	I1216 06:13:48.106398 1600247 certs.go:195] generating shared ca certs ...
	I1216 06:13:48.106415 1600247 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.106544 1600247 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:13:48.641897 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt ...
	I1216 06:13:48.641932 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt: {Name:mkf46262e02ea2028a456580d90b50f2340dbb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.642129 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key ...
	I1216 06:13:48.642142 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key: {Name:mkc8a5e2655ac158b6734542ce846c672953403b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.642228 1600247 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:13:48.823105 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt ...
	I1216 06:13:48.823137 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt: {Name:mkdd726e6e1143a3b07e9bd935c2a97714506c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.823303 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key ...
	I1216 06:13:48.823322 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key: {Name:mk0df7b0c6d0f510dade0ec4ce39add2134f0c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.823410 1600247 certs.go:257] generating profile certs ...
	I1216 06:13:48.823468 1600247 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.key
	I1216 06:13:48.823485 1600247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt with IP's: []
	I1216 06:13:48.862234 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt ...
	I1216 06:13:48.862276 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: {Name:mk89b5e5cac16d069a5128404c05bea70625da4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.862443 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.key ...
	I1216 06:13:48.862457 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.key: {Name:mk074b38d08c2a224c17463efdb2bafa16ad65a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.862542 1600247 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6
	I1216 06:13:48.862560 1600247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 06:13:49.223578 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6 ...
	I1216 06:13:49.223610 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6: {Name:mk891d97de47c0a7b810a8597cbbf7ed57b5d12a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.223794 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6 ...
	I1216 06:13:49.223809 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6: {Name:mkbaf2da6fd5895ff4a1607b98115c6179c9bc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.223893 1600247 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt
	I1216 06:13:49.223974 1600247 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key
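	The apiserver serving certificate generated above is signed for the kubernetes service IP (10.96.0.1), loopback, 10.0.0.1, and the node IP (192.168.49.2). A quick, optional way to confirm the SANs on the copied certificate, assuming openssl is available on the build host:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"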
	I1216 06:13:49.224032 1600247 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key
	I1216 06:13:49.224054 1600247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt with IP's: []
	I1216 06:13:49.515298 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt ...
	I1216 06:13:49.515334 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt: {Name:mk9b7df4149406e4d3144a3c55b374da4eaa475f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.515506 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key ...
	I1216 06:13:49.515524 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key: {Name:mk33c984c867d87faf2e534025dadf476be4340e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.515703 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:13:49.515749 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:13:49.515780 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:13:49.515815 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:13:49.516396 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:13:49.535271 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:13:49.553393 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:13:49.573721 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:13:49.591294 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:13:49.609259 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:13:49.626500 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:13:49.644089 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:13:49.662625 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:13:49.680051 1600247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:13:49.692551 1600247 ssh_runner.go:195] Run: openssl version
	I1216 06:13:49.698980 1600247 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.706869 1600247 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:13:49.714910 1600247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.719790 1600247 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.719856 1600247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.762272 1600247 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:13:49.769716 1600247 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
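	The two steps above install minikubeCA.pem into the node's system trust store: "openssl x509 -hash" prints the subject-name hash (b5213941 here), and the hash-named symlink under /etc/ssl/certs is the filename OpenSSL's CA lookup expects. A sketch of the same sequence, assuming the paths in this log:
	# Print the subject hash that OpenSSL uses to locate CA files in /etc/ssl/certs.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Create the hash-named symlink so TLS clients on the node trust the minikube CA.
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"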
	I1216 06:13:49.777000 1600247 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:13:49.780433 1600247 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:13:49.780557 1600247 kubeadm.go:401] StartCluster: {Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:13:49.780638 1600247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:13:49.780700 1600247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:13:49.807769 1600247 cri.go:89] found id: ""
	I1216 06:13:49.807864 1600247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:13:49.816131 1600247 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:13:49.824399 1600247 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:13:49.824486 1600247 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:13:49.832619 1600247 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:13:49.832638 1600247 kubeadm.go:158] found existing configuration files:
	
	I1216 06:13:49.832714 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:13:49.840939 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:13:49.841006 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:13:49.848609 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:13:49.856583 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:13:49.856680 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:13:49.864291 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:13:49.872192 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:13:49.872290 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:13:49.879912 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:13:49.887557 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:13:49.887674 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:13:49.895293 1600247 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
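	The kubeadm init call above suppresses a long list of preflight checks because the docker driver runs the control plane inside the kicbase container (the log noted earlier that SystemVerification is ignored for this reason), so host-level checks such as Swap, NumCPU, Mem and bridge-nf-call-iptables do not apply there. If you want to see which of those checks would actually fire on the node, kubeadm can run the preflight phase on its own; a minimal sketch assuming the same binary path and config file as the log:
	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml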
	I1216 06:13:49.934678 1600247 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:13:49.935046 1600247 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:49.956488 1600247 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:49.956561 1600247 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:13:49.956596 1600247 kubeadm.go:319] OS: Linux
	I1216 06:13:49.956644 1600247 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:49.956697 1600247 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:49.956747 1600247 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:49.956797 1600247 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:49.956846 1600247 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:49.956895 1600247 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:49.956942 1600247 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:49.956991 1600247 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:49.957038 1600247 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:50.027355 1600247 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:50.027469 1600247 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:50.027561 1600247 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:50.039891 1600247 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:50.046635 1600247 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:50.046807 1600247 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:50.046925 1600247 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:50.553441 1600247 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:13:51.593573 1600247 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:13:52.168632 1600247 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:13:52.572038 1600247 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:13:52.823274 1600247 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:13:52.823646 1600247 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-142606 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 06:13:53.693194 1600247 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:13:53.693532 1600247 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-142606 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 06:13:53.911743 1600247 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:13:54.464917 1600247 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:13:55.050299 1600247 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:13:55.050584 1600247 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:55.684882 1600247 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:56.120383 1600247 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:56.602480 1600247 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:56.723968 1600247 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:58.113130 1600247 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:58.114016 1600247 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:58.116868 1600247 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:58.120366 1600247 out.go:252]   - Booting up control plane ...
	I1216 06:13:58.120496 1600247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:58.120578 1600247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:58.120647 1600247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:58.137090 1600247 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:58.137439 1600247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:58.146799 1600247 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:58.146906 1600247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:58.146946 1600247 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:58.280630 1600247 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:58.280745 1600247 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:59.281274 1600247 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000916313s
	I1216 06:13:59.285063 1600247 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:13:59.285159 1600247 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 06:13:59.285248 1600247 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:13:59.285326 1600247 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:14:01.964908 1600247 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.679192202s
	I1216 06:14:04.877378 1600247 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.592266636s
	I1216 06:14:05.287209 1600247 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001908453s
	I1216 06:14:05.320537 1600247 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:14:05.341173 1600247 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:14:05.365290 1600247 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:14:05.365729 1600247 kubeadm.go:319] [mark-control-plane] Marking the node addons-142606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:14:05.379530 1600247 kubeadm.go:319] [bootstrap-token] Using token: zj5b5t.39n0uh0y5cilprjm
	I1216 06:14:05.384932 1600247 out.go:252]   - Configuring RBAC rules ...
	I1216 06:14:05.385080 1600247 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:14:05.387990 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:14:05.398678 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:14:05.403199 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:14:05.408880 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:14:05.413243 1600247 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:14:05.695428 1600247 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:14:06.124144 1600247 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:14:06.694779 1600247 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:14:06.696264 1600247 kubeadm.go:319] 
	I1216 06:14:06.696339 1600247 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:14:06.696345 1600247 kubeadm.go:319] 
	I1216 06:14:06.696422 1600247 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:14:06.696427 1600247 kubeadm.go:319] 
	I1216 06:14:06.696452 1600247 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:14:06.696561 1600247 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:14:06.696614 1600247 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:14:06.696619 1600247 kubeadm.go:319] 
	I1216 06:14:06.696673 1600247 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:14:06.696676 1600247 kubeadm.go:319] 
	I1216 06:14:06.696724 1600247 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:14:06.696728 1600247 kubeadm.go:319] 
	I1216 06:14:06.696780 1600247 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:14:06.696855 1600247 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:14:06.696924 1600247 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:14:06.696930 1600247 kubeadm.go:319] 
	I1216 06:14:06.697014 1600247 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:14:06.697091 1600247 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:14:06.697095 1600247 kubeadm.go:319] 
	I1216 06:14:06.697186 1600247 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zj5b5t.39n0uh0y5cilprjm \
	I1216 06:14:06.697291 1600247 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b5016a2f19357bbe076308b3bd53072319152b21d9550fc4ffc6d799a06c05 \
	I1216 06:14:06.697311 1600247 kubeadm.go:319] 	--control-plane 
	I1216 06:14:06.697315 1600247 kubeadm.go:319] 
	I1216 06:14:06.697399 1600247 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:14:06.697403 1600247 kubeadm.go:319] 
	I1216 06:14:06.697485 1600247 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zj5b5t.39n0uh0y5cilprjm \
	I1216 06:14:06.697587 1600247 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b5016a2f19357bbe076308b3bd53072319152b21d9550fc4ffc6d799a06c05 
	I1216 06:14:06.700301 1600247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:14:06.700578 1600247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:14:06.700684 1600247 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:14:06.700734 1600247 cni.go:84] Creating CNI manager for ""
	I1216 06:14:06.700748 1600247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:14:06.703777 1600247 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 06:14:06.706713 1600247 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 06:14:06.710862 1600247 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 06:14:06.710884 1600247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 06:14:06.724277 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
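	Because the docker driver plus crio runtime was detected, minikube applied the kindnet manifest (the cni.yaml copied above) as the CNI. A hedged check that the resulting workload is up, assuming the manifest keeps its usual DaemonSet name of kindnet in kube-system:
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet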
	I1216 06:14:07.030599 1600247 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:14:07.030732 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:07.030830 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-142606 minikube.k8s.io/updated_at=2025_12_16T06_14_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=addons-142606 minikube.k8s.io/primary=true
	I1216 06:14:07.169460 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:07.169526 1600247 ops.go:34] apiserver oom_adj: -16
	I1216 06:14:07.670572 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:08.170524 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:08.670340 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:09.169589 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:09.670377 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:10.169579 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:10.670020 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:10.761367 1600247 kubeadm.go:1114] duration metric: took 3.730678413s to wait for elevateKubeSystemPrivileges
	I1216 06:14:10.761396 1600247 kubeadm.go:403] duration metric: took 20.980843269s to StartCluster
	I1216 06:14:10.761414 1600247 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:10.761548 1600247 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:14:10.761949 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:10.762137 1600247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:14:10.762162 1600247 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:14:10.762395 1600247 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:14:10.762435 1600247 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 06:14:10.762527 1600247 addons.go:70] Setting yakd=true in profile "addons-142606"
	I1216 06:14:10.762547 1600247 addons.go:239] Setting addon yakd=true in "addons-142606"
	I1216 06:14:10.762570 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.763040 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.763186 1600247 addons.go:70] Setting inspektor-gadget=true in profile "addons-142606"
	I1216 06:14:10.763202 1600247 addons.go:239] Setting addon inspektor-gadget=true in "addons-142606"
	I1216 06:14:10.763220 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.763609 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.763915 1600247 addons.go:70] Setting metrics-server=true in profile "addons-142606"
	I1216 06:14:10.763934 1600247 addons.go:239] Setting addon metrics-server=true in "addons-142606"
	I1216 06:14:10.763957 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.764370 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.768301 1600247 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-142606"
	I1216 06:14:10.768374 1600247 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-142606"
	I1216 06:14:10.768486 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.768802 1600247 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-142606"
	I1216 06:14:10.768818 1600247 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-142606"
	I1216 06:14:10.768859 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.769419 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.770026 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.770619 1600247 addons.go:70] Setting cloud-spanner=true in profile "addons-142606"
	I1216 06:14:10.770642 1600247 addons.go:239] Setting addon cloud-spanner=true in "addons-142606"
	I1216 06:14:10.770670 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.771099 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.775657 1600247 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-142606"
	I1216 06:14:10.775731 1600247 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-142606"
	I1216 06:14:10.775760 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.776244 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.783877 1600247 addons.go:70] Setting default-storageclass=true in profile "addons-142606"
	I1216 06:14:10.783921 1600247 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-142606"
	I1216 06:14:10.783970 1600247 addons.go:70] Setting registry=true in profile "addons-142606"
	I1216 06:14:10.784033 1600247 addons.go:239] Setting addon registry=true in "addons-142606"
	I1216 06:14:10.784192 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.784279 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.785928 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.800378 1600247 addons.go:70] Setting gcp-auth=true in profile "addons-142606"
	I1216 06:14:10.800422 1600247 mustload.go:66] Loading cluster: addons-142606
	I1216 06:14:10.800647 1600247 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:14:10.800923 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.810307 1600247 addons.go:70] Setting registry-creds=true in profile "addons-142606"
	I1216 06:14:10.810335 1600247 addons.go:239] Setting addon registry-creds=true in "addons-142606"
	I1216 06:14:10.810371 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.810853 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.820107 1600247 addons.go:70] Setting ingress=true in profile "addons-142606"
	I1216 06:14:10.820153 1600247 addons.go:239] Setting addon ingress=true in "addons-142606"
	I1216 06:14:10.820202 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.820866 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.829935 1600247 addons.go:70] Setting storage-provisioner=true in profile "addons-142606"
	I1216 06:14:10.830092 1600247 addons.go:239] Setting addon storage-provisioner=true in "addons-142606"
	I1216 06:14:10.830151 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.831530 1600247 addons.go:70] Setting ingress-dns=true in profile "addons-142606"
	I1216 06:14:10.831559 1600247 addons.go:239] Setting addon ingress-dns=true in "addons-142606"
	I1216 06:14:10.831595 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.832040 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.836134 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.847774 1600247 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-142606"
	I1216 06:14:10.847853 1600247 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-142606"
	I1216 06:14:10.848045 1600247 out.go:179] * Verifying Kubernetes components...
	I1216 06:14:10.848323 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.851278 1600247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:10.872675 1600247 addons.go:70] Setting volcano=true in profile "addons-142606"
	I1216 06:14:10.872752 1600247 addons.go:239] Setting addon volcano=true in "addons-142606"
	I1216 06:14:10.872805 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.873318 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.892690 1600247 addons.go:70] Setting volumesnapshots=true in profile "addons-142606"
	I1216 06:14:10.892765 1600247 addons.go:239] Setting addon volumesnapshots=true in "addons-142606"
	I1216 06:14:10.892820 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.893331 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.913344 1600247 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 06:14:10.921288 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 06:14:10.921317 1600247 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 06:14:10.921391 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:10.943796 1600247 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 06:14:10.945025 1600247 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 06:14:10.981097 1600247 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 06:14:11.033906 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 06:14:11.034087 1600247 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 06:14:11.002319 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 06:14:11.041280 1600247 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 06:14:11.041383 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.041602 1600247 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 06:14:11.041614 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 06:14:11.041667 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.054919 1600247 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 06:14:11.055000 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 06:14:11.055131 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.003913 1600247 addons.go:239] Setting addon default-storageclass=true in "addons-142606"
	I1216 06:14:11.059316 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:11.059843 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:11.081450 1600247 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 06:14:11.085776 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 06:14:11.085863 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 06:14:11.085965 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.003978 1600247 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 06:14:11.102938 1600247 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 06:14:11.102964 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 06:14:11.103050 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.103458 1600247 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-142606"
	I1216 06:14:11.103498 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:11.103944 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:11.112014 1600247 host.go:66] Checking if "addons-142606" exists ...
	W1216 06:14:11.114437 1600247 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
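	The warning above is the same constraint that shows up in the Volcano results in this report: the volcano addon is not supported on the crio runtime, so enabling it is rejected. A minimal sketch for reviewing which addons did enable for this profile, assuming the default kubeconfig context name that minikube creates:
	# List addon status for this profile.
	minikube -p addons-142606 addons list
	# Check the pods created by the addon manifests applied below.
	kubectl --context addons-142606 -n kube-system get pods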
	I1216 06:14:11.114791 1600247 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 06:14:11.114804 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 06:14:11.114859 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.122635 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 06:14:11.122799 1600247 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 06:14:11.122855 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 06:14:11.134137 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 06:14:11.137030 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 06:14:11.143606 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 06:14:11.146664 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 06:14:11.149667 1600247 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 06:14:11.149695 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 06:14:11.149769 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.123518 1600247 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:14:11.123524 1600247 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 06:14:11.130550 1600247 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 06:14:11.171485 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 06:14:11.171575 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.183864 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 06:14:11.184139 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.184983 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 06:14:11.185554 1600247 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:14:11.185577 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:14:11.185640 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.209415 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 06:14:11.209528 1600247 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 06:14:11.209705 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.229773 1600247 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 06:14:11.236610 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 06:14:11.238546 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 06:14:11.238568 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 06:14:11.238633 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.246052 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.247346 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 06:14:11.252602 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 06:14:11.257483 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 06:14:11.257526 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 06:14:11.257607 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.292759 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.323640 1600247 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:14:11.323661 1600247 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:14:11.323722 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.356873 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.360613 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.361666 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.381845 1600247 out.go:179]   - Using image docker.io/busybox:stable
	I1216 06:14:11.382015 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.391966 1600247 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 06:14:11.392628 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.400404 1600247 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 06:14:11.400429 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 06:14:11.400613 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.418750 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.445398 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.468520 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.469282 1600247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:14:11.469581 1600247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
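	The long pipeline above rewrites the coredns ConfigMap in place: sed injects a hosts block mapping host.minikube.internal to the gateway IP (192.168.49.1) ahead of the forward plugin and adds the log plugin before errors, then kubectl replace feeds the result back. To view the patched Corefile afterwards, a sketch using the same kubectl binary and kubeconfig as the log:
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected to contain, alongside the existing plugins:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }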
	I1216 06:14:11.488774 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.488818 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.491674 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.498995 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	W1216 06:14:11.500228 1600247 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 06:14:11.500258 1600247 retry.go:31] will retry after 339.752207ms: ssh: handshake failed: EOF
	I1216 06:14:11.717729 1600247 node_ready.go:35] waiting up to 6m0s for node "addons-142606" to be "Ready" ...
	I1216 06:14:11.721299 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 06:14:11.721374 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 06:14:11.913775 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 06:14:11.913857 1600247 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 06:14:11.971101 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 06:14:11.971124 1600247 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 06:14:11.972260 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 06:14:12.086634 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 06:14:12.086657 1600247 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 06:14:12.184589 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 06:14:12.186852 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 06:14:12.186916 1600247 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 06:14:12.202969 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 06:14:12.260848 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 06:14:12.270739 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:14:12.292636 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 06:14:12.293397 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 06:14:12.329882 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 06:14:12.329908 1600247 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 06:14:12.341706 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 06:14:12.380841 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 06:14:12.390499 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 06:14:12.390521 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 06:14:12.402160 1600247 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 06:14:12.402185 1600247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 06:14:12.418574 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 06:14:12.450876 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 06:14:12.450903 1600247 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 06:14:12.512900 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 06:14:12.512927 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 06:14:12.630953 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 06:14:12.630978 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 06:14:12.666324 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 06:14:12.666350 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 06:14:12.667346 1600247 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 06:14:12.667371 1600247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 06:14:12.684817 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:14:12.695073 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 06:14:12.695142 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 06:14:12.738986 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 06:14:12.939944 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 06:14:12.969488 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 06:14:12.969510 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 06:14:12.994138 1600247 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 06:14:12.994160 1600247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 06:14:13.124038 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 06:14:13.124107 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 06:14:13.163228 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 06:14:13.163296 1600247 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 06:14:13.361572 1600247 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 06:14:13.361641 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 06:14:13.400285 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 06:14:13.400354 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 06:14:13.585335 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 06:14:13.585411 1600247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 06:14:13.588014 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1216 06:14:13.753297 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:13.773467 1600247 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.303859159s)
	I1216 06:14:13.773547 1600247 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
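	(For readability: the sed pipeline completed above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway 192.168.49.1. Reconstructed from the sed expression in the log, not copied from the cluster, the affected part of the Corefile ends up roughly as:

		        log
		        errors
		        hosts {
		           192.168.49.1 host.minikube.internal
		           fallthrough
		        }
		        forward . /etc/resolv.conf
	)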
	I1216 06:14:13.773671 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.801386597s)
	I1216 06:14:13.838945 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 06:14:13.839016 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 06:14:14.159182 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 06:14:14.159255 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 06:14:14.278097 1600247 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-142606" context rescaled to 1 replicas
	I1216 06:14:14.463904 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 06:14:14.463925 1600247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 06:14:14.605637 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1216 06:14:15.761631 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:16.433351 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.230292065s)
	I1216 06:14:16.433449 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.172577488s)
	I1216 06:14:16.433506 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.162743054s)
	I1216 06:14:16.433549 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.140132969s)
	I1216 06:14:16.433772 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.141111573s)
	I1216 06:14:16.433820 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.09209193s)
	I1216 06:14:16.433855 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.05299121s)
	I1216 06:14:16.433976 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.2492991s)
	I1216 06:14:16.434004 1600247 addons.go:495] Verifying addon metrics-server=true in "addons-142606"
	I1216 06:14:17.222321 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.803694511s)
	I1216 06:14:17.222663 1600247 addons.go:495] Verifying addon ingress=true in "addons-142606"
	I1216 06:14:17.222431 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.537538911s)
	I1216 06:14:17.222465 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.483408831s)
	I1216 06:14:17.222488 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.282475719s)
	I1216 06:14:17.223237 1600247 addons.go:495] Verifying addon registry=true in "addons-142606"
	I1216 06:14:17.222556 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.634476144s)
	W1216 06:14:17.223455 1600247 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 06:14:17.223471 1600247 retry.go:31] will retry after 268.785662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
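	(For readability: the failure above is the usual CRD-establishment race. csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch as the CRD that introduces that kind, so the first apply has no REST mapping for it yet; minikube simply retries, and the retry at 06:14:17 below re-applies with kubectl apply --force, which succeeds once the CRDs are registered. Outside of that built-in retry, the same race could be avoided by waiting for the CRD to be established before applying the class, roughly like this; a sketch only, assuming kubectl is pointed at this cluster and using the standard external-snapshotter CRD name seen in the log:

		kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	)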
	I1216 06:14:17.226426 1600247 out.go:179] * Verifying registry addon...
	I1216 06:14:17.226433 1600247 out.go:179] * Verifying ingress addon...
	I1216 06:14:17.226591 1600247 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-142606 service yakd-dashboard -n yakd-dashboard
	
	I1216 06:14:17.231010 1600247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 06:14:17.231010 1600247 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 06:14:17.238244 1600247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 06:14:17.238415 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:17.238560 1600247 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 06:14:17.238590 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:17.493258 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 06:14:17.546926 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.941231546s)
	I1216 06:14:17.546966 1600247 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-142606"
	I1216 06:14:17.549944 1600247 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 06:14:17.554364 1600247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 06:14:17.567454 1600247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 06:14:17.567522 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:17.737282 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:17.737605 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:18.058570 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:18.220810 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:18.235182 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:18.235497 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:18.558829 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:18.722808 1600247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 06:14:18.722964 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:18.735012 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:18.735087 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:18.745764 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:18.873604 1600247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 06:14:18.886315 1600247 addons.go:239] Setting addon gcp-auth=true in "addons-142606"
	I1216 06:14:18.886363 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:18.886829 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:18.903740 1600247 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 06:14:18.903795 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:18.920632 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:19.057643 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:19.234858 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:19.235074 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:19.558396 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:19.734717 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:19.734891 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:20.058618 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:20.222930 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:20.236940 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:20.237182 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:20.244520 1600247 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.340744757s)
	I1216 06:14:20.244742 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.751399321s)
	I1216 06:14:20.247616 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 06:14:20.250390 1600247 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 06:14:20.253333 1600247 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 06:14:20.253359 1600247 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 06:14:20.266659 1600247 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 06:14:20.266722 1600247 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 06:14:20.280988 1600247 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 06:14:20.281013 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 06:14:20.294426 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 06:14:20.558302 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:20.736375 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:20.737627 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:20.830885 1600247 addons.go:495] Verifying addon gcp-auth=true in "addons-142606"
	I1216 06:14:20.833977 1600247 out.go:179] * Verifying gcp-auth addon...
	I1216 06:14:20.836808 1600247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 06:14:20.845517 1600247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 06:14:20.845582 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:21.058091 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:21.234447 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:21.234828 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:21.340534 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:21.558305 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:21.734523 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:21.734917 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:21.839692 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:22.057730 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:22.235290 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:22.235555 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:22.340340 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:22.557824 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:22.720850 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:22.735489 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:22.735899 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:22.839998 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:23.057939 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:23.234983 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:23.235221 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:23.339798 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:23.558228 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:23.735804 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:23.735855 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:23.840234 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:24.057483 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:24.235058 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:24.235164 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:24.340359 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:24.558004 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:24.721015 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:24.734068 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:24.734195 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:24.840066 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:25.057957 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:25.235012 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:25.235198 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:25.340045 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:25.557880 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:25.734697 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:25.734815 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:25.839616 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:26.057858 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:26.235916 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:26.236609 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:26.340506 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:26.557988 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:26.721097 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:26.734410 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:26.734512 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:26.840289 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:27.057310 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:27.234397 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:27.234517 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:27.340551 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:27.557600 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:27.734339 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:27.734451 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:27.841314 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:28.058078 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:28.234653 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:28.234940 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:28.340290 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:28.557238 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:28.734813 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:28.735243 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:28.839970 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:29.057870 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:29.220358 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:29.234110 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:29.234650 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:29.340148 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:29.557006 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:29.734282 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:29.734469 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:29.840454 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:30.073213 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:30.235513 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:30.235708 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:30.340495 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:30.557583 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:30.735183 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:30.735281 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:30.840255 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:31.057535 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:31.221431 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:31.234753 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:31.234956 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:31.339828 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:31.557757 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:31.735117 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:31.735515 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:31.840538 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:32.057409 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:32.234832 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:32.234852 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:32.340736 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:32.557438 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:32.734569 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:32.736396 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:32.840718 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:33.057713 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:33.234680 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:33.235118 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:33.339989 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:33.558524 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:33.721252 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:33.734453 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:33.735048 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:33.839959 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:34.058124 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:34.234863 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:34.235687 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:34.339314 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:34.557393 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:34.734534 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:34.734675 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:34.840750 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:35.058407 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:35.235839 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:35.236096 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:35.340062 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:35.557877 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:35.733979 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:35.734127 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:35.839930 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:36.057915 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:36.220802 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:36.242562 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:36.242795 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:36.340073 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:36.558326 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:36.734754 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:36.735065 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:36.839947 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:37.058120 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:37.235900 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:37.236265 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:37.340028 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:37.558543 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:37.734928 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:37.735293 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:37.840093 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:38.058954 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:38.234687 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:38.235316 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:38.340424 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:38.557338 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:38.721410 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:38.734308 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:38.734796 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:38.840586 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:39.057898 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:39.234910 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:39.235202 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:39.340353 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:39.557232 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:39.734303 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:39.734563 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:39.840452 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:40.057920 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:40.235528 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:40.235742 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:40.342221 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:40.557205 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:40.734854 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:40.735232 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:40.840222 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:41.057741 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:41.221587 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:41.235115 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:41.235684 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:41.340604 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:41.558020 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:41.735895 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:41.736022 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:41.840185 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:42.058135 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:42.234903 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:42.235028 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:42.339952 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:42.558594 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:42.734491 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:42.734739 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:42.840604 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:43.058029 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:43.234597 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:43.234696 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:43.340607 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:43.560287 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:43.721099 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:43.734244 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:43.734582 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:43.840450 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:44.058236 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:44.235062 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:44.235425 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:44.340380 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:44.557746 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:44.735582 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:44.736076 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:44.839822 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:45.063774 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:45.241664 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:45.243711 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:45.340053 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:45.557906 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:45.721532 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:45.734614 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:45.734736 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:45.840706 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:46.057939 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:46.233948 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:46.235141 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:46.340124 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:46.558603 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:46.734736 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:46.734935 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:46.839985 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:47.058425 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:47.234500 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:47.234630 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:47.340712 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:47.558090 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:47.721911 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:47.736778 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:47.737247 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:47.840268 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:48.057457 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:48.236170 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:48.236316 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:48.339721 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:48.558042 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:48.734656 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:48.734802 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:48.839777 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:49.057591 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:49.234611 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:49.236604 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:49.340444 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:49.557301 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:49.735002 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:49.735326 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:49.840185 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:50.057398 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:50.221122 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:50.235573 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:50.235823 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:50.340556 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:50.557165 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:50.734011 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:50.734494 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:50.840319 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:51.058145 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:51.236568 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:51.236837 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:51.340578 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:51.557774 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:51.734859 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:51.735485 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:51.840312 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:52.057961 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:52.234531 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:52.234962 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:52.339762 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:52.557937 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:52.749258 1600247 node_ready.go:49] node "addons-142606" is "Ready"
	I1216 06:14:52.749286 1600247 node_ready.go:38] duration metric: took 41.031474138s for node "addons-142606" to be "Ready" ...
	I1216 06:14:52.749301 1600247 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:14:52.749359 1600247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:14:52.768591 1600247 api_server.go:72] duration metric: took 42.006401302s to wait for apiserver process to appear ...
	I1216 06:14:52.768685 1600247 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:14:52.768720 1600247 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 06:14:52.797624 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:52.799061 1600247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 06:14:52.799122 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:52.827602 1600247 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 06:14:52.843955 1600247 api_server.go:141] control plane version: v1.34.2
	I1216 06:14:52.844036 1600247 api_server.go:131] duration metric: took 75.329629ms to wait for apiserver health ...
	I1216 06:14:52.844061 1600247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:14:52.865849 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:52.867010 1600247 system_pods.go:59] 19 kube-system pods found
	I1216 06:14:52.867090 1600247 system_pods.go:61] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:52.867114 1600247 system_pods.go:61] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:52.867153 1600247 system_pods.go:61] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending
	I1216 06:14:52.867178 1600247 system_pods.go:61] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:52.867200 1600247 system_pods.go:61] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:52.867239 1600247 system_pods.go:61] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:52.867266 1600247 system_pods.go:61] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:52.867288 1600247 system_pods.go:61] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:52.867327 1600247 system_pods.go:61] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending
	I1216 06:14:52.867353 1600247 system_pods.go:61] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:52.867373 1600247 system_pods.go:61] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:52.867420 1600247 system_pods.go:61] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending
	I1216 06:14:52.867442 1600247 system_pods.go:61] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending
	I1216 06:14:52.867462 1600247 system_pods.go:61] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending
	I1216 06:14:52.867502 1600247 system_pods.go:61] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending
	I1216 06:14:52.867525 1600247 system_pods.go:61] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending
	I1216 06:14:52.867549 1600247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:52.867584 1600247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending
	I1216 06:14:52.867607 1600247 system_pods.go:61] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending
	I1216 06:14:52.867631 1600247 system_pods.go:74] duration metric: took 23.549064ms to wait for pod list to return data ...
	I1216 06:14:52.867666 1600247 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:14:52.889075 1600247 default_sa.go:45] found service account: "default"
	I1216 06:14:52.889155 1600247 default_sa.go:55] duration metric: took 21.464603ms for default service account to be created ...
	I1216 06:14:52.889181 1600247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:14:52.900518 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:52.900603 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:52.900625 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:52.900645 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending
	I1216 06:14:52.900677 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:52.900700 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:52.900719 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:52.900757 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:52.900785 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:52.900812 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending
	I1216 06:14:52.900850 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:52.900874 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:52.900898 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:52.900933 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending
	I1216 06:14:52.900957 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending
	I1216 06:14:52.900976 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending
	I1216 06:14:52.901013 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending
	I1216 06:14:52.901050 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:52.901070 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending
	I1216 06:14:52.901105 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending
	I1216 06:14:52.901146 1600247 retry.go:31] will retry after 218.919549ms: missing components: kube-dns
	I1216 06:14:53.125650 1600247 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 06:14:53.125724 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:53.138444 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:53.138534 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:53.138557 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:53.138600 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending
	I1216 06:14:53.138626 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:53.138648 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:53.138687 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:53.138712 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:53.138737 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:53.138777 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 06:14:53.138803 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:53.138825 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:53.138862 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:53.138885 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending
	I1216 06:14:53.138913 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 06:14:53.138952 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 06:14:53.138978 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending
	I1216 06:14:53.139001 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:53.139035 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending
	I1216 06:14:53.139060 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:14:53.139094 1600247 retry.go:31] will retry after 309.512001ms: missing components: kube-dns
	I1216 06:14:53.266436 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:53.266582 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:53.359909 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:53.463214 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:53.463250 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:53.463258 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:53.463267 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 06:14:53.463272 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:53.463276 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:53.463281 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:53.463285 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:53.463296 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:53.463303 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 06:14:53.463315 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:53.463321 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:53.463327 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:53.463334 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 06:14:53.463344 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 06:14:53.463349 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 06:14:53.463356 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 06:14:53.463363 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:53.463370 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 06:14:53.463378 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:14:53.463402 1600247 retry.go:31] will retry after 459.795537ms: missing components: kube-dns
	I1216 06:14:53.601116 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:53.735582 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:53.736127 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:53.843035 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:53.944938 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:53.945022 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Running
	I1216 06:14:53.945049 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 06:14:53.945091 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 06:14:53.945120 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 06:14:53.945144 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:53.945171 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:53.945205 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:53.945231 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:53.945257 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 06:14:53.945281 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:53.945318 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:53.945349 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:53.945376 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 06:14:53.945403 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 06:14:53.945435 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 06:14:53.945463 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 06:14:53.945491 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 06:14:53.945521 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 06:14:53.945565 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:14:53.945589 1600247 system_pods.go:126] duration metric: took 1.056387317s to wait for k8s-apps to be running ...
	I1216 06:14:53.945617 1600247 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:14:53.945696 1600247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:14:54.051751 1600247 system_svc.go:56] duration metric: took 106.125686ms WaitForService to wait for kubelet
	I1216 06:14:54.051826 1600247 kubeadm.go:587] duration metric: took 43.289639094s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:14:54.051862 1600247 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:14:54.063011 1600247 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 06:14:54.063198 1600247 node_conditions.go:123] node cpu capacity is 2
	I1216 06:14:54.063231 1600247 node_conditions.go:105] duration metric: took 11.344948ms to run NodePressure ...
	I1216 06:14:54.063272 1600247 start.go:242] waiting for startup goroutines ...
	I1216 06:14:54.067613 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:54.235842 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:54.236529 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:54.352874 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:54.558491 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:54.735851 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:54.736536 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:54.841308 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:55.058207 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:55.236110 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:55.236693 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:55.340676 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:55.558343 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:55.736068 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:55.736385 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:55.840729 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:56.059114 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:56.236349 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:56.237670 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:56.346971 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:56.563490 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:56.737561 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:56.737963 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:56.840584 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:57.066043 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:57.237693 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:57.238160 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:57.340256 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:57.560269 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:57.741142 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:57.742216 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:57.840903 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:58.059201 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:58.236606 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:58.237033 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:58.343410 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:58.558435 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:58.735289 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:58.735444 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:58.840532 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:59.058283 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:59.235299 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:59.235476 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:59.340552 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:59.557672 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:59.735067 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:59.735747 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:59.841587 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:00.071144 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:00.238290 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:00.252036 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:00.370049 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:00.559867 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:00.735667 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:00.736046 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:00.840936 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:01.059533 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:01.236949 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:01.237425 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:01.341126 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:01.558504 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:01.737322 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:01.737785 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:01.840616 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:02.058836 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:02.236397 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:02.236927 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:02.340719 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:02.559136 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:02.735947 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:02.736196 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:02.841122 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:03.058902 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:03.235865 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:03.235990 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:03.340675 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:03.558412 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:03.735648 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:03.736976 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:03.840701 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:04.057587 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:04.235948 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:04.235974 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:04.339988 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:04.558862 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:04.736048 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:04.736459 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:04.841016 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:05.059469 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:05.236706 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:05.237060 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:05.340171 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:05.559049 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:05.734827 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:05.735164 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:05.840740 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:06.058134 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:06.235626 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:06.237301 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:06.340815 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:06.562081 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:06.736365 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:06.737830 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:06.841178 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:07.059936 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:07.238710 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:07.249083 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:07.339789 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:07.559465 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:07.737101 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:07.737426 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:07.840973 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:08.058861 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:08.236592 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:08.237870 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:08.340374 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:08.559786 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:08.754990 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:08.755197 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:08.844696 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:09.060946 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:09.237486 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:09.238258 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:09.340631 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:09.559148 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:09.736392 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:09.736591 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:09.854846 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:10.059286 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:10.236337 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:10.236770 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:10.340295 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:10.563001 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:10.735176 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:10.735974 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:10.840031 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:11.058487 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:11.235512 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:11.236535 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:11.341827 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:11.559299 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:11.736077 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:11.736409 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:11.843539 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:12.058718 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:12.234908 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:12.235102 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:12.344460 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:12.594830 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:12.735445 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:12.735679 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:12.840684 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:13.057840 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:13.236155 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:13.236437 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:13.346468 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:13.558447 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:13.735126 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:13.735301 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:13.841608 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:14.058853 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:14.236304 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:14.236781 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:14.340338 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:14.557989 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:14.736413 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:14.736965 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:14.841371 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:15.058199 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:15.243081 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:15.243248 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:15.341744 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:15.558908 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:15.736426 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:15.737829 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:15.840323 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:16.058541 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:16.235628 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:16.236275 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:16.340555 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:16.557796 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:16.735995 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:16.737263 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:16.840181 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:17.058775 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:17.235965 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:17.236551 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:17.340512 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:17.559648 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:17.735407 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:17.736107 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:17.840303 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:18.058916 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:18.234702 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:18.235103 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:18.341079 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:18.559612 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:18.736984 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:18.737353 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:18.840908 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:19.058377 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:19.235177 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:19.235497 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:19.340590 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:19.559796 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:19.734868 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:19.735385 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:19.843342 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:20.058236 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:20.235328 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:20.235539 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:20.340139 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:20.566799 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:20.735948 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:20.736187 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:20.840288 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:21.057791 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:21.234934 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:21.235768 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:21.339762 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:21.558442 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:21.735494 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:21.738318 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:21.840887 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:22.059115 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:22.235298 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:22.235319 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:22.340336 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:22.558015 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:22.735209 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:22.736378 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:22.840556 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:23.058375 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:23.235901 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:23.236100 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:23.339876 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:23.558844 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:23.735851 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:23.736116 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:23.840563 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:24.058468 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:24.234615 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:24.234738 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:24.341527 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:24.563350 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:24.735115 1600247 kapi.go:107] duration metric: took 1m7.504103062s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 06:15:24.735211 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:24.842783 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:25.060654 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:25.238439 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:25.340058 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:25.558896 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:25.735763 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:25.840680 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:26.058614 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:26.234600 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:26.340134 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:26.557665 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:26.735490 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:26.863961 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:27.059178 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:27.234408 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:27.340898 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:27.559324 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:27.734979 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:27.840619 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:28.059023 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:28.234971 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:28.340244 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:28.557964 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:28.735268 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:28.840922 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:29.059626 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:29.234959 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:29.340224 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:29.557752 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:29.735401 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:29.840727 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:30.077769 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:30.235659 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:30.341158 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:30.564577 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:30.735034 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:30.843470 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:31.058806 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:31.234703 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:31.340866 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:31.558606 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:31.735789 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:31.839830 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:32.058651 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:32.235072 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:32.340180 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:32.557394 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:32.734614 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:32.840663 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:33.060784 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:33.235544 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:33.340627 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:33.558687 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:33.737305 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:33.840620 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:34.059210 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:34.234880 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:34.339985 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:34.558573 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:34.738591 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:34.841074 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:35.059230 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:35.235452 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:35.340777 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:35.558526 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:35.735074 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:35.839968 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:36.058854 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:36.235241 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:36.340426 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:36.557763 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:36.734821 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:36.840224 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:37.057904 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:37.235239 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:37.347151 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:37.559343 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:37.737518 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:37.840703 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:38.066300 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:38.241604 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:38.340532 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:38.560894 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:38.740676 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:38.841541 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:39.060128 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:39.235098 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:39.342549 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:39.558066 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:39.734948 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:39.841920 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:40.059884 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:40.235366 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:40.343675 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:40.557778 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:40.737046 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:40.843672 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:41.059027 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:41.236899 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:41.340849 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:41.559110 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:41.742308 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:41.842299 1600247 kapi.go:107] duration metric: took 1m21.005489102s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 06:15:41.846107 1600247 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-142606 cluster.
	I1216 06:15:41.849002 1600247 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 06:15:41.851945 1600247 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 06:15:42.059112 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:42.235747 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:42.558233 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:42.734437 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:43.057445 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:43.235002 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:43.558715 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:43.735621 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:44.064149 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:44.234673 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:44.559212 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:44.734855 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:45.064046 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:45.238759 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:45.558816 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:45.735156 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:46.057745 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:46.235276 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:46.557968 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:46.735549 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:47.057892 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:47.240887 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:47.558343 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:47.734460 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:48.058060 1600247 kapi.go:107] duration metric: took 1m30.503692488s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 06:15:48.234194 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:48.734511 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:49.235390 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:49.735020 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:50.235427 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:50.735608 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:51.234572 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:51.735865 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:52.234588 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:52.735036 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:53.234244 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:53.734815 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:54.234644 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:54.734636 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:55.235532 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:55.734872 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:56.234802 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:56.736169 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:57.234775 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:57.734758 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:58.235168 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:58.735126 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:59.235723 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:59.735139 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:00.261924 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:00.734940 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:01.235232 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:01.734868 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:02.234883 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:02.734463 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:03.234802 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:03.734130 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:04.235422 1600247 kapi.go:107] duration metric: took 1m47.004411088s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 06:16:04.240359 1600247 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, inspektor-gadget, storage-provisioner, registry-creds, cloud-spanner, ingress-dns, metrics-server, storage-provisioner-rancher, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1216 06:16:04.243853 1600247 addons.go:530] duration metric: took 1m53.480765171s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin inspektor-gadget storage-provisioner registry-creds cloud-spanner ingress-dns metrics-server storage-provisioner-rancher yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1216 06:16:04.243918 1600247 start.go:247] waiting for cluster config update ...
	I1216 06:16:04.243941 1600247 start.go:256] writing updated cluster config ...
	I1216 06:16:04.245241 1600247 ssh_runner.go:195] Run: rm -f paused
	I1216 06:16:04.250321 1600247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:16:04.261109 1600247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hzh7x" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.275578 1600247 pod_ready.go:94] pod "coredns-66bc5c9577-hzh7x" is "Ready"
	I1216 06:16:04.275659 1600247 pod_ready.go:86] duration metric: took 14.522712ms for pod "coredns-66bc5c9577-hzh7x" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.281118 1600247 pod_ready.go:83] waiting for pod "etcd-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.289839 1600247 pod_ready.go:94] pod "etcd-addons-142606" is "Ready"
	I1216 06:16:04.289913 1600247 pod_ready.go:86] duration metric: took 8.721699ms for pod "etcd-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.293138 1600247 pod_ready.go:83] waiting for pod "kube-apiserver-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.301578 1600247 pod_ready.go:94] pod "kube-apiserver-addons-142606" is "Ready"
	I1216 06:16:04.301653 1600247 pod_ready.go:86] duration metric: took 8.442771ms for pod "kube-apiserver-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.304445 1600247 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.653970 1600247 pod_ready.go:94] pod "kube-controller-manager-addons-142606" is "Ready"
	I1216 06:16:04.654005 1600247 pod_ready.go:86] duration metric: took 349.482904ms for pod "kube-controller-manager-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.855557 1600247 pod_ready.go:83] waiting for pod "kube-proxy-g5n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.254351 1600247 pod_ready.go:94] pod "kube-proxy-g5n5p" is "Ready"
	I1216 06:16:05.254384 1600247 pod_ready.go:86] duration metric: took 398.800433ms for pod "kube-proxy-g5n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.455018 1600247 pod_ready.go:83] waiting for pod "kube-scheduler-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.855052 1600247 pod_ready.go:94] pod "kube-scheduler-addons-142606" is "Ready"
	I1216 06:16:05.855078 1600247 pod_ready.go:86] duration metric: took 400.033111ms for pod "kube-scheduler-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.855120 1600247 pod_ready.go:40] duration metric: took 1.604739835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:16:05.919372 1600247 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 06:16:05.923168 1600247 out.go:179] * Done! kubectl is now configured to use "addons-142606" cluster and "default" namespace by default
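
A note on the gcp-auth hint in the log above: the addon's admission webhook mounts the GCP credentials into every new pod unless the pod opts out with the `gcp-auth-skip-secret` label. A minimal sketch of what that opt-out looks like in a pod configuration (the pod name and image are placeholders, and the "true" value is the label value conventionally used by the addon, not something captured in this run):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                   # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"       # tells the gcp-auth webhook not to mount credentials
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox # placeholder image
	    command: ["sleep", "3600"]

Applying a manifest like this after the addon is enabled should leave the pod without the mounted credential secret; existing pods keep whatever they were created with unless recreated or the addon is re-enabled with --refresh, as the log notes.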
	
	
	==> CRI-O <==
	Dec 16 06:19:05 addons-142606 crio[829]: time="2025-12-16T06:19:05.601468885Z" level=info msg="Removed container 116d8d4922fb774ccbf1c02c05337e60a1e1f34e4758c1ab2e8e2782f0a3702a: kube-system/registry-creds-764b6fb674-8vxwt/registry-creds" id=1b818e3d-72d3-4281-a3ce-f2de59e8605d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.287164227Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-288ws/POD" id=37780170-8bd4-4b81-b66c-ad64b593594a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.287256503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.304560723Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-288ws Namespace:default ID:15ddc74b53803e6cce807221ca3e4e613765a23a305bdc3cd763243fed90bb53 UID:2a05a09d-d9f1-4aef-b5b4-bc206c3ef83d NetNS:/var/run/netns/2df2ddf5-7319-4b60-ae22-653b9c5b50a1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40018b8ef8}] Aliases:map[]}"
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.305555663Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-288ws to CNI network \"kindnet\" (type=ptp)"
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.323799422Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-288ws Namespace:default ID:15ddc74b53803e6cce807221ca3e4e613765a23a305bdc3cd763243fed90bb53 UID:2a05a09d-d9f1-4aef-b5b4-bc206c3ef83d NetNS:/var/run/netns/2df2ddf5-7319-4b60-ae22-653b9c5b50a1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40018b8ef8}] Aliases:map[]}"
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.32416105Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-288ws for CNI network kindnet (type=ptp)"
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.329242539Z" level=info msg="Ran pod sandbox 15ddc74b53803e6cce807221ca3e4e613765a23a305bdc3cd763243fed90bb53 with infra container: default/hello-world-app-5d498dc89-288ws/POD" id=37780170-8bd4-4b81-b66c-ad64b593594a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.330783691Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2803e357-e664-43cc-8617-b8275c389da8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.331319368Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=2803e357-e664-43cc-8617-b8275c389da8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.331494804Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=2803e357-e664-43cc-8617-b8275c389da8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.334795942Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=63f4dbdf-eeed-40c9-83a7-4976b228f527 name=/runtime.v1.ImageService/PullImage
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.339154724Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.994231696Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=63f4dbdf-eeed-40c9-83a7-4976b228f527 name=/runtime.v1.ImageService/PullImage
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.994878685Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bfd4dd22-5ee1-4f26-82cc-0e3270e931c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:19:06 addons-142606 crio[829]: time="2025-12-16T06:19:06.998601932Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=93bd2ae3-1443-476a-8dce-8a738d48d21e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.006786887Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-288ws/hello-world-app" id=2ec556a6-5658-4b7b-aa79-0b3c476bb395 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.007125688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.025182097Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.025461534Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e162e9888b749119686141f06f74de811bf07ed87dc8562c193cc0c6dbf7eec8/merged/etc/passwd: no such file or directory"
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.025490178Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e162e9888b749119686141f06f74de811bf07ed87dc8562c193cc0c6dbf7eec8/merged/etc/group: no such file or directory"
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.025843569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.04713837Z" level=info msg="Created container 8e8fe4fd92263b383871c20642e895034bafe8a32a74ded2084c6cd1d88fd0fb: default/hello-world-app-5d498dc89-288ws/hello-world-app" id=2ec556a6-5658-4b7b-aa79-0b3c476bb395 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.050295753Z" level=info msg="Starting container: 8e8fe4fd92263b383871c20642e895034bafe8a32a74ded2084c6cd1d88fd0fb" id=87ab6ef1-87df-49ad-a28c-3f5baaa9b6f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 06:19:07 addons-142606 crio[829]: time="2025-12-16T06:19:07.052929397Z" level=info msg="Started container" PID=7069 containerID=8e8fe4fd92263b383871c20642e895034bafe8a32a74ded2084c6cd1d88fd0fb description=default/hello-world-app-5d498dc89-288ws/hello-world-app id=87ab6ef1-87df-49ad-a28c-3f5baaa9b6f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15ddc74b53803e6cce807221ca3e4e613765a23a305bdc3cd763243fed90bb53
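
The CRI-O entries above trace the full startup of the hello-world-app pod: sandbox creation and CNI wiring, an image-status check that finds docker.io/kicbase/echo-server:1.0 absent, a pull that resolves the tag to its sha256 digest, then container creation and start. The container listing in the next section is the runtime's view of the same state; to reproduce it against this node, something along these lines should work (the binary path and profile name are taken from this report, and crictl is assumed to be available on the node, which is the normal case for the crio runtime):

	out/minikube-linux-arm64 -p addons-142606 ssh "sudo crictl ps -a"
	out/minikube-linux-arm64 -p addons-142606 ssh "sudo crictl inspecti docker.io/kicbase/echo-server:1.0"

The first command lists all containers, including exited ones such as the admission create/patch jobs below; the second shows the locally stored record for the image CRI-O just pulled, including its digest.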
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	8e8fe4fd92263       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   15ddc74b53803       hello-world-app-5d498dc89-288ws             default
	6fb9a4bfa9caa       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             3 seconds ago            Exited              registry-creds                           1                   8fb5a967c277a       registry-creds-764b6fb674-8vxwt             kube-system
	01f697fb15fde       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago            Running             nginx                                    0                   c0edfc5555130       nginx                                       default
	a39808548c99c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   adc631c8ab736       busybox                                     default
	b5e6638bf6970       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   32fcdec6fe4c9       ingress-nginx-controller-85d4c799dd-lswbn   ingress-nginx
	6703e84bcca40       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	88067168cfc83       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	6731eaf9efe44       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	0fee244cfec70       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	28c54e5bde756       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	d8f84e055fa4a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   29532cbb89ace       gcp-auth-78565c9fb4-cvfrs                   gcp-auth
	3e14d8c6a72b9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   dee9e4bf0489f       gadget-gr47j                                gadget
	165110b3c1752       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	c5f817c74f04c       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   6c4e5d8283fd0       csi-hostpath-attacher-0                     kube-system
	83c2340bd3725       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    1                   4a3beda6b5f20       ingress-nginx-admission-patch-2jxxc         ingress-nginx
	978cf1e646b5f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   48347a6d56575       ingress-nginx-admission-create-n8gg2        ingress-nginx
	0d582f614e063       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   6b1f76b9dd2c4       nvidia-device-plugin-daemonset-w4pvk        kube-system
	7818fd4ffad1e       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   a59d38e23d2df       csi-hostpath-resizer-0                      kube-system
	a433ea848c0b6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   ae31fa60b7bf5       registry-proxy-qh7wq                        kube-system
	161c43bb0c1f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   6098fdaacb978       snapshot-controller-7d9fbc56b8-ljkmz        kube-system
	794d61cf642b9       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   d164098573082       yakd-dashboard-5ff678cb9-g8xrb              yakd-dashboard
	8abe529e41335       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   70ea98e42e819       snapshot-controller-7d9fbc56b8-ntp5r        kube-system
	a9c9065484348       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   cb2c98f715668       registry-6b586f9694-prj95                   kube-system
	b44a527d8f947       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   8264b5ab0fda8       local-path-provisioner-648f6765c9-p55rr     local-path-storage
	c26c874f20ada       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   056e765782b41       kube-ingress-dns-minikube                   kube-system
	a6a3cfe490f36       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   f66f4a99edf04       cloud-spanner-emulator-5bdddb765-fdxwf      default
	ce1c26a19229b       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   ea4f10ad66d82       metrics-server-85b7d694d7-t6mbs             kube-system
	05deafa86a477       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   2f033fd13c808       storage-provisioner                         kube-system
	3ba24e9ad28c6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   812830e954ef1       coredns-66bc5c9577-hzh7x                    kube-system
	420ec82bb1093       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   62c6b2b311cdc       kube-proxy-g5n5p                            kube-system
	200b85d246fd0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   3fb6e285e445f       kindnet-t8fqq                               kube-system
	df77467f393ab       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   d5ff8aa59ae82       etcd-addons-142606                          kube-system
	579811cebcc83       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   80c1f7eac1e9f       kube-scheduler-addons-142606                kube-system
	f245307e594fb       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   4603e2976e9c2       kube-controller-manager-addons-142606       kube-system
	c9eb26e694306       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   630d75d0c4d56       kube-apiserver-addons-142606                kube-system
	
	
	==> coredns [3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8] <==
	[INFO] 10.244.0.10:57034 - 8624 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00246492s
	[INFO] 10.244.0.10:57034 - 21401 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000135124s
	[INFO] 10.244.0.10:57034 - 18084 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000094778s
	[INFO] 10.244.0.10:38179 - 30117 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155005s
	[INFO] 10.244.0.10:38179 - 29895 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082413s
	[INFO] 10.244.0.10:50531 - 46224 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133523s
	[INFO] 10.244.0.10:50531 - 46043 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000158689s
	[INFO] 10.244.0.10:49525 - 9094 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109564s
	[INFO] 10.244.0.10:49525 - 8906 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137486s
	[INFO] 10.244.0.10:45727 - 42048 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001446407s
	[INFO] 10.244.0.10:45727 - 41859 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001497394s
	[INFO] 10.244.0.10:53308 - 44634 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123464s
	[INFO] 10.244.0.10:53308 - 44814 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000464973s
	[INFO] 10.244.0.20:57954 - 25106 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000217644s
	[INFO] 10.244.0.20:58198 - 50745 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017249s
	[INFO] 10.244.0.20:49658 - 5258 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137199s
	[INFO] 10.244.0.20:51292 - 50006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125105s
	[INFO] 10.244.0.20:45806 - 53631 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000303987s
	[INFO] 10.244.0.20:60210 - 9172 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000320316s
	[INFO] 10.244.0.20:34277 - 39597 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00263695s
	[INFO] 10.244.0.20:43917 - 16084 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003460828s
	[INFO] 10.244.0.20:35135 - 28426 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002211511s
	[INFO] 10.244.0.20:60089 - 59835 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001838215s
	[INFO] 10.244.0.23:59803 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000185347s
	[INFO] 10.244.0.23:46497 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156457s
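
The CoreDNS lines above show the usual in-cluster search-path expansion at work: a client looking up registry.kube-system.svc.cluster.local first tries the name with each resolver search domain appended (its own namespace, svc.cluster.local, cluster.local, then the node's us-east-2.compute.internal domain), collecting NXDOMAIN answers, and only the exact service FQDN returns NOERROR. That pattern is what a pod's default /etc/resolv.conf produces; a sketch of what it typically looks like in this cluster (the nameserver IP is the common kube-dns default and the namespace is a placeholder, neither was captured in this log):

	nameserver 10.96.0.10
	search <namespace>.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	options ndots:5

With ndots:5, any name containing fewer than five dots is tried against the search list before being sent as an absolute query, which is exactly the sequence of NXDOMAIN/NOERROR pairs recorded above.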
	
	
	==> describe nodes <==
	Name:               addons-142606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-142606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=addons-142606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T06_14_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-142606
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-142606"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:14:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-142606
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 06:19:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 06:18:12 +0000   Tue, 16 Dec 2025 06:13:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 06:18:12 +0000   Tue, 16 Dec 2025 06:13:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 06:18:12 +0000   Tue, 16 Dec 2025 06:13:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 06:18:12 +0000   Tue, 16 Dec 2025 06:14:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-142606
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                a2823150-fbe0-44c8-b17f-fc2660ac30ce
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     cloud-spanner-emulator-5bdddb765-fdxwf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  default                     hello-world-app-5d498dc89-288ws              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-gr47j                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-78565c9fb4-cvfrs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-lswbn    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m51s
	  kube-system                 coredns-66bc5c9577-hzh7x                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpathplugin-ds9r9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 etcd-addons-142606                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m4s
	  kube-system                 kindnet-t8fqq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-142606                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-addons-142606        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-g5n5p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-142606                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 metrics-server-85b7d694d7-t6mbs              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m53s
	  kube-system                 nvidia-device-plugin-daemonset-w4pvk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 registry-6b586f9694-prj95                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-creds-764b6fb674-8vxwt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-proxy-qh7wq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-ljkmz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-7d9fbc56b8-ntp5r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-p55rr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-g8xrb               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m55s                kube-proxy       
	  Normal   Starting                 5m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node addons-142606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node addons-142606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s (x8 over 5m9s)  kubelet          Node addons-142606 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m2s                 kubelet          Node addons-142606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s                 kubelet          Node addons-142606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s                 kubelet          Node addons-142606 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m58s                node-controller  Node addons-142606 event: Registered Node addons-142606 in Controller
	  Normal   NodeReady                4m16s                kubelet          Node addons-142606 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1] <==
	{"level":"warn","ts":"2025-12-16T06:14:01.864289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.886509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.916947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.951084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.980894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.006508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.018143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.034110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.054085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.065828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.092555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.109794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.125731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.140718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.190266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.219304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.235742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.272560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.308690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:17.826441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:17.850321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.268998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.296015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.312463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.333996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56034","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d8f84e055fa4ad5627655e68e01268675ea80482afc04b1ce4728b8d18407e57] <==
	2025/12/16 06:15:40 GCP Auth Webhook started!
	2025/12/16 06:16:06 Ready to marshal response ...
	2025/12/16 06:16:06 Ready to write response ...
	2025/12/16 06:16:06 Ready to marshal response ...
	2025/12/16 06:16:06 Ready to write response ...
	2025/12/16 06:16:06 Ready to marshal response ...
	2025/12/16 06:16:06 Ready to write response ...
	2025/12/16 06:16:28 Ready to marshal response ...
	2025/12/16 06:16:28 Ready to write response ...
	2025/12/16 06:16:33 Ready to marshal response ...
	2025/12/16 06:16:33 Ready to write response ...
	2025/12/16 06:16:33 Ready to marshal response ...
	2025/12/16 06:16:33 Ready to write response ...
	2025/12/16 06:16:42 Ready to marshal response ...
	2025/12/16 06:16:42 Ready to write response ...
	2025/12/16 06:16:44 Ready to marshal response ...
	2025/12/16 06:16:44 Ready to write response ...
	2025/12/16 06:16:53 Ready to marshal response ...
	2025/12/16 06:16:53 Ready to write response ...
	2025/12/16 06:17:18 Ready to marshal response ...
	2025/12/16 06:17:18 Ready to write response ...
	2025/12/16 06:19:05 Ready to marshal response ...
	2025/12/16 06:19:05 Ready to write response ...
	
	
	==> kernel <==
	 06:19:08 up  9:01,  0 user,  load average: 0.28, 1.08, 1.64
	Linux addons-142606 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92] <==
	I1216 06:17:02.358769       1 main.go:301] handling current node
	I1216 06:17:12.358175       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:17:12.358327       1 main.go:301] handling current node
	I1216 06:17:22.357743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:17:22.357796       1 main.go:301] handling current node
	I1216 06:17:32.360551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:17:32.360583       1 main.go:301] handling current node
	I1216 06:17:42.364551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:17:42.364589       1 main.go:301] handling current node
	I1216 06:17:52.358676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:17:52.358778       1 main.go:301] handling current node
	I1216 06:18:02.358051       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:18:02.358193       1 main.go:301] handling current node
	I1216 06:18:12.357061       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:18:12.357098       1 main.go:301] handling current node
	I1216 06:18:22.364881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:18:22.364991       1 main.go:301] handling current node
	I1216 06:18:32.358052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:18:32.358086       1 main.go:301] handling current node
	I1216 06:18:42.357308       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:18:42.357343       1 main.go:301] handling current node
	I1216 06:18:52.360680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:18:52.360714       1 main.go:301] handling current node
	I1216 06:19:02.364559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:19:02.364599       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e] <==
	W1216 06:14:40.287914       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:40.312418       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:40.327490       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:52.702339       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.230.18:443: connect: connection refused
	E1216 06:14:52.702388       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.230.18:443: connect: connection refused" logger="UnhandledError"
	W1216 06:14:52.702829       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.230.18:443: connect: connection refused
	E1216 06:14:52.702860       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.230.18:443: connect: connection refused" logger="UnhandledError"
	W1216 06:14:52.788800       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.230.18:443: connect: connection refused
	E1216 06:14:52.788847       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.230.18:443: connect: connection refused" logger="UnhandledError"
	E1216 06:15:07.409179       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.210.185:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.210.185:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.210.185:443: connect: connection refused" logger="UnhandledError"
	W1216 06:15:07.410281       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 06:15:07.410371       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 06:15:07.489650       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 06:15:07.496843       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1216 06:16:16.979741       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39880: use of closed network connection
	E1216 06:16:17.208831       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39912: use of closed network connection
	E1216 06:16:17.347778       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39928: use of closed network connection
	I1216 06:16:44.591302       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 06:16:44.907362       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.183.229"}
	I1216 06:16:59.484328       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1216 06:17:26.236402       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1216 06:19:06.178904       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.218.211"}
	
	
	==> kube-controller-manager [f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2] <==
	I1216 06:14:10.295766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 06:14:10.295836       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 06:14:10.295767       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 06:14:10.295880       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 06:14:10.295967       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 06:14:10.296301       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 06:14:10.296881       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 06:14:10.298108       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 06:14:10.298351       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 06:14:10.298377       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 06:14:10.298399       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 06:14:10.299792       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 06:14:10.305033       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 06:14:10.344670       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 06:14:10.344884       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 06:14:10.344904       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1216 06:14:15.953975       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 06:14:40.261093       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 06:14:40.261255       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 06:14:40.261315       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 06:14:40.293651       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 06:14:40.298938       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 06:14:40.361565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 06:14:40.399958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 06:14:55.253233       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b] <==
	I1216 06:14:12.238672       1 server_linux.go:53] "Using iptables proxy"
	I1216 06:14:12.358328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 06:14:12.459064       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 06:14:12.459102       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 06:14:12.459185       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 06:14:12.560556       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 06:14:12.560604       1 server_linux.go:132] "Using iptables Proxier"
	I1216 06:14:12.578309       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 06:14:12.578648       1 server.go:527] "Version info" version="v1.34.2"
	I1216 06:14:12.578664       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 06:14:12.580042       1 config.go:200] "Starting service config controller"
	I1216 06:14:12.580052       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 06:14:12.580070       1 config.go:106] "Starting endpoint slice config controller"
	I1216 06:14:12.580074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 06:14:12.580086       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 06:14:12.580090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 06:14:12.580728       1 config.go:309] "Starting node config controller"
	I1216 06:14:12.580736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 06:14:12.580742       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 06:14:12.680334       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 06:14:12.680367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 06:14:12.680406       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a] <==
	I1216 06:14:02.386682       1 serving.go:386] Generated self-signed cert in-memory
	W1216 06:14:04.827633       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 06:14:04.827749       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 06:14:04.827783       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 06:14:04.827812       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 06:14:04.849723       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 06:14:04.850358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 06:14:04.852414       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 06:14:04.852514       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1216 06:14:04.853984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1216 06:14:04.854450       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 06:14:04.854512       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 06:14:06.054770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 06:17:26 addons-142606 kubelet[1283]: I1216 06:17:26.215339    1283 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-15914de1-8fd1-4d55-9e81-ae7260d5356c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^df3bd0f9-da46-11f0-9c0f-36881a696a1a\") on node \"addons-142606\" "
	Dec 16 06:17:26 addons-142606 kubelet[1283]: I1216 06:17:26.222852    1283 scope.go:117] "RemoveContainer" containerID="cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5"
	Dec 16 06:17:26 addons-142606 kubelet[1283]: E1216 06:17:26.223519    1283 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5\": container with ID starting with cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5 not found: ID does not exist" containerID="cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5"
	Dec 16 06:17:26 addons-142606 kubelet[1283]: I1216 06:17:26.223754    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5"} err="failed to get container status \"cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5\": rpc error: code = NotFound desc = could not find container \"cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5\": container with ID starting with cbec5d79dc8813a81f56f484f6d6e64583d3dbe7943758c58851ee997f106fa5 not found: ID does not exist"
	Dec 16 06:17:26 addons-142606 kubelet[1283]: I1216 06:17:26.226433    1283 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-15914de1-8fd1-4d55-9e81-ae7260d5356c" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^df3bd0f9-da46-11f0-9c0f-36881a696a1a") on node "addons-142606"
	Dec 16 06:17:26 addons-142606 kubelet[1283]: I1216 06:17:26.316635    1283 reconciler_common.go:299] "Volume detached for volume \"pvc-15914de1-8fd1-4d55-9e81-ae7260d5356c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^df3bd0f9-da46-11f0-9c0f-36881a696a1a\") on node \"addons-142606\" DevicePath \"\""
	Dec 16 06:17:28 addons-142606 kubelet[1283]: I1216 06:17:28.070583    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2399f1d1-dae3-43b2-8403-28cbc7343861" path="/var/lib/kubelet/pods/2399f1d1-dae3-43b2-8403-28cbc7343861/volumes"
	Dec 16 06:18:03 addons-142606 kubelet[1283]: I1216 06:18:03.067307    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qh7wq" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 06:18:22 addons-142606 kubelet[1283]: I1216 06:18:22.067865    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w4pvk" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 06:18:56 addons-142606 kubelet[1283]: I1216 06:18:56.068635    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-prj95" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 06:19:03 addons-142606 kubelet[1283]: I1216 06:19:03.070408    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8vxwt" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 06:19:04 addons-142606 kubelet[1283]: I1216 06:19:04.572667    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8vxwt" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 06:19:04 addons-142606 kubelet[1283]: I1216 06:19:04.572724    1283 scope.go:117] "RemoveContainer" containerID="116d8d4922fb774ccbf1c02c05337e60a1e1f34e4758c1ab2e8e2782f0a3702a"
	Dec 16 06:19:05 addons-142606 kubelet[1283]: I1216 06:19:05.578120    1283 scope.go:117] "RemoveContainer" containerID="116d8d4922fb774ccbf1c02c05337e60a1e1f34e4758c1ab2e8e2782f0a3702a"
	Dec 16 06:19:05 addons-142606 kubelet[1283]: I1216 06:19:05.578434    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8vxwt" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 06:19:05 addons-142606 kubelet[1283]: I1216 06:19:05.578468    1283 scope.go:117] "RemoveContainer" containerID="6fb9a4bfa9caaebd212421ecc1cc702035bc2af38f3036ac5ca6c8fe0e994e82"
	Dec 16 06:19:05 addons-142606 kubelet[1283]: E1216 06:19:05.578628    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8vxwt_kube-system(42db279c-e8af-4665-a64c-91e4804c2b00)\"" pod="kube-system/registry-creds-764b6fb674-8vxwt" podUID="42db279c-e8af-4665-a64c-91e4804c2b00"
	Dec 16 06:19:06 addons-142606 kubelet[1283]: I1216 06:19:06.067085    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2a05a09d-d9f1-4aef-b5b4-bc206c3ef83d-gcp-creds\") pod \"hello-world-app-5d498dc89-288ws\" (UID: \"2a05a09d-d9f1-4aef-b5b4-bc206c3ef83d\") " pod="default/hello-world-app-5d498dc89-288ws"
	Dec 16 06:19:06 addons-142606 kubelet[1283]: I1216 06:19:06.067952    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqrxq\" (UniqueName: \"kubernetes.io/projected/2a05a09d-d9f1-4aef-b5b4-bc206c3ef83d-kube-api-access-gqrxq\") pod \"hello-world-app-5d498dc89-288ws\" (UID: \"2a05a09d-d9f1-4aef-b5b4-bc206c3ef83d\") " pod="default/hello-world-app-5d498dc89-288ws"
	Dec 16 06:19:06 addons-142606 kubelet[1283]: E1216 06:19:06.210006    1283 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/143f760118650f87a86d64a356c79a6f5aeafb8294ea95747c71a3d4d65296ac/diff" to get inode usage: stat /var/lib/containers/storage/overlay/143f760118650f87a86d64a356c79a6f5aeafb8294ea95747c71a3d4d65296ac/diff: no such file or directory, extraDiskErr: <nil>
	Dec 16 06:19:06 addons-142606 kubelet[1283]: W1216 06:19:06.326549    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/crio-15ddc74b53803e6cce807221ca3e4e613765a23a305bdc3cd763243fed90bb53 WatchSource:0}: Error finding container 15ddc74b53803e6cce807221ca3e4e613765a23a305bdc3cd763243fed90bb53: Status 404 returned error can't find the container with id 15ddc74b53803e6cce807221ca3e4e613765a23a305bdc3cd763243fed90bb53
	Dec 16 06:19:06 addons-142606 kubelet[1283]: I1216 06:19:06.590826    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8vxwt" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 06:19:06 addons-142606 kubelet[1283]: I1216 06:19:06.590883    1283 scope.go:117] "RemoveContainer" containerID="6fb9a4bfa9caaebd212421ecc1cc702035bc2af38f3036ac5ca6c8fe0e994e82"
	Dec 16 06:19:06 addons-142606 kubelet[1283]: E1216 06:19:06.591033    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8vxwt_kube-system(42db279c-e8af-4665-a64c-91e4804c2b00)\"" pod="kube-system/registry-creds-764b6fb674-8vxwt" podUID="42db279c-e8af-4665-a64c-91e4804c2b00"
	Dec 16 06:19:07 addons-142606 kubelet[1283]: I1216 06:19:07.635892    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-288ws" podStartSLOduration=1.971847725 podStartE2EDuration="2.635871449s" podCreationTimestamp="2025-12-16 06:19:05 +0000 UTC" firstStartedPulling="2025-12-16 06:19:06.331861175 +0000 UTC m=+300.408553675" lastFinishedPulling="2025-12-16 06:19:06.995884899 +0000 UTC m=+301.072577399" observedRunningTime="2025-12-16 06:19:07.633565932 +0000 UTC m=+301.710258432" watchObservedRunningTime="2025-12-16 06:19:07.635871449 +0000 UTC m=+301.712563957"
	
	
	==> storage-provisioner [05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea] <==
	W1216 06:18:43.087697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:45.097327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:45.117439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:47.120808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:47.125075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:49.128956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:49.135310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:51.138447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:51.149241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:53.152277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:53.156799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:55.160096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:55.167413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:57.170430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:57.177198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:59.179819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:18:59.184924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:01.188351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:01.193432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:03.196444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:03.201226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:05.205082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:05.209642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:07.214018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:19:07.223628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-142606 -n addons-142606
helpers_test.go:270: (dbg) Run:  kubectl --context addons-142606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-142606 describe pod ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-142606 describe pod ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc: exit status 1 (95.239233ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-n8gg2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2jxxc" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-142606 describe pod ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (297.994166ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:19:09.330399 1609790 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:19:09.331472 1609790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:09.331517 1609790 out.go:374] Setting ErrFile to fd 2...
	I1216 06:19:09.331540 1609790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:09.331862 1609790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:19:09.332196 1609790 mustload.go:66] Loading cluster: addons-142606
	I1216 06:19:09.332681 1609790 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:19:09.332721 1609790 addons.go:622] checking whether the cluster is paused
	I1216 06:19:09.332858 1609790 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:19:09.332887 1609790 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:19:09.333438 1609790 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:19:09.374114 1609790 ssh_runner.go:195] Run: systemctl --version
	I1216 06:19:09.374174 1609790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:19:09.401177 1609790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:19:09.499985 1609790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:19:09.500086 1609790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:19:09.531774 1609790 cri.go:89] found id: "6fb9a4bfa9caaebd212421ecc1cc702035bc2af38f3036ac5ca6c8fe0e994e82"
	I1216 06:19:09.531800 1609790 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:19:09.531805 1609790 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:19:09.531808 1609790 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:19:09.531812 1609790 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:19:09.531817 1609790 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:19:09.531820 1609790 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:19:09.531823 1609790 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:19:09.531826 1609790 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:19:09.531832 1609790 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:19:09.531836 1609790 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:19:09.531839 1609790 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:19:09.531843 1609790 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:19:09.531846 1609790 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:19:09.531850 1609790 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:19:09.531860 1609790 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:19:09.531868 1609790 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:19:09.531875 1609790 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:19:09.531879 1609790 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:19:09.531882 1609790 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:19:09.531887 1609790 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:19:09.531890 1609790 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:19:09.531893 1609790 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:19:09.531896 1609790 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:19:09.531899 1609790 cri.go:89] found id: ""
	I1216 06:19:09.531955 1609790 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:19:09.548106 1609790 out.go:203] 
	W1216 06:19:09.551082 1609790 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:19:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:19:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:19:09.551111 1609790 out.go:285] * 
	* 
	W1216 06:19:09.558631 1609790 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:19:09.561797 1609790 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable ingress --alsologtostderr -v=1: exit status 11 (264.153036ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:19:09.623004 1609903 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:19:09.623756 1609903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:09.623790 1609903 out.go:374] Setting ErrFile to fd 2...
	I1216 06:19:09.623822 1609903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:09.624246 1609903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:19:09.624645 1609903 mustload.go:66] Loading cluster: addons-142606
	I1216 06:19:09.625125 1609903 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:19:09.626250 1609903 addons.go:622] checking whether the cluster is paused
	I1216 06:19:09.626457 1609903 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:19:09.626479 1609903 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:19:09.627104 1609903 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:19:09.646790 1609903 ssh_runner.go:195] Run: systemctl --version
	I1216 06:19:09.646854 1609903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:19:09.665770 1609903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:19:09.759138 1609903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:19:09.759224 1609903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:19:09.796986 1609903 cri.go:89] found id: "6fb9a4bfa9caaebd212421ecc1cc702035bc2af38f3036ac5ca6c8fe0e994e82"
	I1216 06:19:09.797009 1609903 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:19:09.797015 1609903 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:19:09.797019 1609903 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:19:09.797022 1609903 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:19:09.797026 1609903 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:19:09.797029 1609903 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:19:09.797033 1609903 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:19:09.797036 1609903 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:19:09.797050 1609903 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:19:09.797061 1609903 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:19:09.797065 1609903 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:19:09.797068 1609903 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:19:09.797072 1609903 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:19:09.797079 1609903 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:19:09.797085 1609903 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:19:09.797088 1609903 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:19:09.797094 1609903 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:19:09.797097 1609903 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:19:09.797100 1609903 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:19:09.797105 1609903 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:19:09.797108 1609903 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:19:09.797112 1609903 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:19:09.797115 1609903 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:19:09.797119 1609903 cri.go:89] found id: ""
	I1216 06:19:09.797169 1609903 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:19:09.812100 1609903 out.go:203] 
	W1216 06:19:09.814973 1609903 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:19:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:19:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:19:09.814999 1609903 out.go:285] * 
	* 
	W1216 06:19:09.822547 1609903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:19:09.825702 1609903 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.55s)
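
The stderr traces above show the sequence behind each MK_ADDON_DISABLE_PAUSED exit: the disable path lists kube-system containers through crictl (which works against cri-o) and then runs `sudo runc list -f json`, which fails because /run/runc does not exist on this node. The sketch below is a hypothetical reproduction, not minikube source; it assumes direct shell access to the node with crictl and runc on PATH and simply replays the two commands captured in the log.

	package main
	
	// Hypothetical reproduction sketch (not minikube source): replays the two
	// commands visible in the stderr trace above to show why the paused-state
	// check behind MK_ADDON_DISABLE_PAUSED fails on a cri-o node.
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run executes a command, capturing stdout+stderr, and prints the result.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\nerr: %v\n%s\n", name, args, err, out)
	}
	
	func main() {
		// CRI-level listing used earlier in the trace; succeeds against cri-o.
		run("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		// Low-level runc listing run next by the check; on this node it fails
		// with "open /run/runc: no such file or directory".
		run("sudo", "runc", "list", "-f", "json")
	}

On a cri-o runtime the second command is expected to fail in exactly this way, which is consistent with every `addons disable` invocation in this run exiting with status 11.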

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-gr47j" [334e70b6-c034-434e-b111-34ac0cf28c9c] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003276506s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (259.841921ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:17:33.253947 1608715 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:17:33.255228 1608715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:33.255276 1608715 out.go:374] Setting ErrFile to fd 2...
	I1216 06:17:33.255298 1608715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:33.255627 1608715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:17:33.255986 1608715 mustload.go:66] Loading cluster: addons-142606
	I1216 06:17:33.256431 1608715 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:33.256511 1608715 addons.go:622] checking whether the cluster is paused
	I1216 06:17:33.256666 1608715 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:33.256702 1608715 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:17:33.257271 1608715 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:17:33.275196 1608715 ssh_runner.go:195] Run: systemctl --version
	I1216 06:17:33.275276 1608715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:17:33.293634 1608715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:17:33.391421 1608715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:17:33.391504 1608715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:17:33.420383 1608715 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:17:33.420409 1608715 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:17:33.420415 1608715 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:17:33.420419 1608715 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:17:33.420422 1608715 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:17:33.420426 1608715 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:17:33.420451 1608715 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:17:33.420462 1608715 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:17:33.420502 1608715 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:17:33.420509 1608715 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:17:33.420512 1608715 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:17:33.420515 1608715 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:17:33.420543 1608715 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:17:33.420555 1608715 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:17:33.420559 1608715 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:17:33.420564 1608715 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:17:33.420567 1608715 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:17:33.420571 1608715 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:17:33.420574 1608715 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:17:33.420577 1608715 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:17:33.420580 1608715 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:17:33.420591 1608715 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:17:33.420594 1608715 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:17:33.420597 1608715 cri.go:89] found id: ""
	I1216 06:17:33.420666 1608715 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:17:33.436819 1608715 out.go:203] 
	W1216 06:17:33.439725 1608715 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:17:33.439766 1608715 out.go:285] * 
	* 
	W1216 06:17:33.447151 1608715 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:17:33.449917 1608715 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)
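
The readiness wait at addons_test.go:825 above (up to 8m0s for pods matching "k8s-app=gadget" in the gadget namespace) succeeded in about 6s; only the subsequent disable step hit the runc check. For reference, a minimal client-go equivalent of that label-selector wait is sketched below; the waitForRunningPod helper, the 2s poll interval, and loading the default kubeconfig are illustrative assumptions, not the test's own helpers.

```go
// Illustrative only: a label-selector readiness wait expressed with client-go,
// mirroring the "k8s-app=gadget" wait logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until at least one pod matching selector in ns
// reports phase Running, or the timeout expires.
func waitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace, selector, and timeout mirror the log entry above.
	if err := waitForRunningPod(context.Background(), cs, "gadget", "k8s-app=gadget", 8*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("gadget pod is Running")
}
```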

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.462969ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004731392s
addons_test.go:465: (dbg) Run:  kubectl --context addons-142606 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (285.365869ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:16:44.066113 1607622 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:44.067502 1607622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:44.067578 1607622 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:44.067628 1607622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:44.068120 1607622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:44.068770 1607622 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:44.069373 1607622 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:44.069435 1607622 addons.go:622] checking whether the cluster is paused
	I1216 06:16:44.069626 1607622 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:44.069680 1607622 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:44.070247 1607622 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:44.088300 1607622 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:44.088356 1607622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:44.106693 1607622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:44.214036 1607622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:44.214140 1607622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:44.247323 1607622 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:44.247350 1607622 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:44.247356 1607622 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:44.247360 1607622 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:44.247363 1607622 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:44.247368 1607622 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:44.247371 1607622 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:44.247375 1607622 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:44.247378 1607622 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:44.247386 1607622 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:44.247389 1607622 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:44.247392 1607622 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:44.247397 1607622 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:44.247400 1607622 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:44.247405 1607622 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:44.247419 1607622 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:44.247423 1607622 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:44.247452 1607622 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:44.247460 1607622 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:44.247464 1607622 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:44.247469 1607622 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:44.247475 1607622 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:44.247478 1607622 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:44.247481 1607622 cri.go:89] found id: ""
	I1216 06:16:44.247549 1607622 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:44.263205 1607622 out.go:203] 
	W1216 06:16:44.266107 1607622 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:44.266137 1607622 out.go:285] * 
	* 
	W1216 06:16:44.273388 1607622 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:44.276293 1607622 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)
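
The functional part of this test passed: the metrics-server pod became healthy and `kubectl top pods -n kube-system` returned data; the failure is again the disable step. As a hedged sketch, the same check can be made against the metrics.k8s.io API with the k8s.io/metrics client; the kubeconfig path and the panic-based error handling are illustrative shortcuts, not the report's tooling.

```go
// Sketch (not from the test): query pod metrics for kube-system through the
// metrics.k8s.io API, the same data `kubectl top pods -n kube-system` shows.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	mc, err := metricsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// If metrics-server is not serving, this List fails just as kubectl top would.
	podMetrics, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pm := range podMetrics.Items {
		for _, c := range pm.Containers {
			fmt.Printf("%s/%s cpu=%s mem=%s\n",
				pm.Name, c.Name, c.Usage.Cpu().String(), c.Usage.Memory().String())
		}
	}
}
```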

                                                
                                    
x
+
TestAddons/parallel/CSI (44.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1216 06:16:42.952365 1599255 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 06:16:42.959327 1599255 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 06:16:42.959360 1599255 kapi.go:107] duration metric: took 13.295064ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 13.307273ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-142606 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-142606 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [efcd7069-480d-4a0f-90cd-30f3a4a78ac0] Pending
helpers_test.go:353: "task-pv-pod" [efcd7069-480d-4a0f-90cd-30f3a4a78ac0] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003556106s
addons_test.go:574: (dbg) Run:  kubectl --context addons-142606 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-142606 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-142606 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-142606 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-142606 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-142606 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-142606 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [2399f1d1-dae3-43b2-8403-28cbc7343861] Pending
helpers_test.go:353: "task-pv-pod-restore" [2399f1d1-dae3-43b2-8403-28cbc7343861] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [2399f1d1-dae3-43b2-8403-28cbc7343861] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003617127s
addons_test.go:616: (dbg) Run:  kubectl --context addons-142606 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-142606 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-142606 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (294.094677ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:17:26.684705 1608600 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:17:26.685539 1608600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:26.685555 1608600 out.go:374] Setting ErrFile to fd 2...
	I1216 06:17:26.685562 1608600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:26.685900 1608600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:17:26.686242 1608600 mustload.go:66] Loading cluster: addons-142606
	I1216 06:17:26.686668 1608600 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:26.686690 1608600 addons.go:622] checking whether the cluster is paused
	I1216 06:17:26.686838 1608600 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:26.686858 1608600 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:17:26.687545 1608600 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:17:26.708367 1608600 ssh_runner.go:195] Run: systemctl --version
	I1216 06:17:26.708437 1608600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:17:26.738138 1608600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:17:26.846955 1608600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:17:26.847053 1608600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:17:26.876450 1608600 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:17:26.876499 1608600 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:17:26.876506 1608600 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:17:26.876510 1608600 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:17:26.876513 1608600 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:17:26.876516 1608600 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:17:26.876520 1608600 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:17:26.876529 1608600 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:17:26.876532 1608600 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:17:26.876551 1608600 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:17:26.876555 1608600 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:17:26.876558 1608600 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:17:26.876561 1608600 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:17:26.876572 1608600 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:17:26.876575 1608600 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:17:26.876584 1608600 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:17:26.876594 1608600 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:17:26.876602 1608600 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:17:26.876606 1608600 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:17:26.876609 1608600 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:17:26.876618 1608600 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:17:26.876621 1608600 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:17:26.876624 1608600 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:17:26.876627 1608600 cri.go:89] found id: ""
	I1216 06:17:26.876699 1608600 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:17:26.903672 1608600 out.go:203] 
	W1216 06:17:26.906532 1608600 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:17:26.906560 1608600 out.go:285] * 
	* 
	W1216 06:17:26.913912 1608600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:17:26.916859 1608600 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (268.36101ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:17:26.979510 1608653 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:17:26.980416 1608653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:26.980462 1608653 out.go:374] Setting ErrFile to fd 2...
	I1216 06:17:26.980496 1608653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:17:26.980811 1608653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:17:26.981159 1608653 mustload.go:66] Loading cluster: addons-142606
	I1216 06:17:26.981584 1608653 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:26.981631 1608653 addons.go:622] checking whether the cluster is paused
	I1216 06:17:26.981770 1608653 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:17:26.981807 1608653 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:17:26.982336 1608653 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:17:27.000968 1608653 ssh_runner.go:195] Run: systemctl --version
	I1216 06:17:27.001022 1608653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:17:27.021997 1608653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:17:27.118935 1608653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:17:27.119032 1608653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:17:27.155641 1608653 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:17:27.155662 1608653 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:17:27.155667 1608653 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:17:27.155672 1608653 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:17:27.155675 1608653 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:17:27.155679 1608653 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:17:27.155682 1608653 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:17:27.155686 1608653 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:17:27.155689 1608653 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:17:27.155696 1608653 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:17:27.155699 1608653 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:17:27.155703 1608653 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:17:27.155707 1608653 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:17:27.155711 1608653 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:17:27.155714 1608653 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:17:27.155726 1608653 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:17:27.155754 1608653 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:17:27.155763 1608653 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:17:27.155766 1608653 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:17:27.155770 1608653 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:17:27.155799 1608653 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:17:27.155809 1608653 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:17:27.155816 1608653 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:17:27.155818 1608653 cri.go:89] found id: ""
	I1216 06:17:27.155869 1608653 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:17:27.170594 1608653 out.go:203] 
	W1216 06:17:27.173560 1608653 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:17:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:17:27.173596 1608653 out.go:285] * 
	* 
	W1216 06:17:27.181030 1608653 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:17:27.183835 1608653 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.25s)
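
The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` calls above are a poll loop waiting for the hpvc and hpvc-restore claims to bind; the CSI workflow itself (provision, snapshot, restore) completed, and only the trailing addon-disable calls failed. A minimal client-go version of that polling is sketched below; waitForPVCBound, the 3s interval, and the 6m timeout are assumptions chosen to mirror the log, not code from addons_test.go.

```go
// Illustrative sketch of the PVC phase polling the CSI test performs with
// repeated kubectl jsonpath queries: wait until the claim reports Bound.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound re-reads the claim every 3s until it is Bound or the
// deadline passes, reporting the last observed phase on timeout.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pvc %s/%s not Bound after %s (last phase: %s)", ns, name, timeout, pvc.Status.Phase)
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "hpvc" in "default" matches the claim created from
	// testdata/csi-hostpath-driver/pvc.yaml in the log above.
	if err := waitForPVCBound(context.Background(), cs, "default", "hpvc", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc hpvc is Bound")
}
```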

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-142606 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-142606 --alsologtostderr -v=1: exit status 11 (262.350182ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:16:17.661764 1606441 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:17.663011 1606441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:17.663026 1606441 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:17.663032 1606441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:17.663327 1606441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:17.663674 1606441 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:17.664080 1606441 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:17.664100 1606441 addons.go:622] checking whether the cluster is paused
	I1216 06:16:17.664211 1606441 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:17.664227 1606441 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:17.664768 1606441 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:17.682609 1606441 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:17.682676 1606441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:17.701309 1606441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:17.799106 1606441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:17.799190 1606441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:17.828593 1606441 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:17.828667 1606441 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:17.828690 1606441 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:17.828710 1606441 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:17.828744 1606441 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:17.828770 1606441 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:17.828788 1606441 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:17.828820 1606441 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:17.828844 1606441 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:17.828868 1606441 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:17.828901 1606441 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:17.828919 1606441 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:17.828939 1606441 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:17.828961 1606441 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:17.828991 1606441 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:17.829017 1606441 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:17.829045 1606441 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:17.829076 1606441 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:17.829093 1606441 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:17.829112 1606441 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:17.829148 1606441 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:17.829175 1606441 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:17.829196 1606441 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:17.829228 1606441 cri.go:89] found id: ""
	I1216 06:16:17.829311 1606441 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:17.844976 1606441 out.go:203] 
	W1216 06:16:17.847946 1606441 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:17.847971 1606441 out.go:285] * 
	* 
	W1216 06:16:17.855316 1606441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:17.858389 1606441 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-142606 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-142606
helpers_test.go:244: (dbg) docker inspect addons-142606:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726",
	        "Created": "2025-12-16T06:13:40.999815489Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1600649,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:13:41.068786507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/hosts",
	        "LogPath": "/var/lib/docker/containers/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726-json.log",
	        "Name": "/addons-142606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-142606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-142606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726",
	                "LowerDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/287315b8b7ed6b8475b7a96b373d7b2b829ce5fd0faa6eca67651cbe6bd9badf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-142606",
	                "Source": "/var/lib/docker/volumes/addons-142606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-142606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-142606",
	                "name.minikube.sigs.k8s.io": "addons-142606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a54c8801a155c7108fb424e85c2dd89dbbbe83437dfab238ac7e6a5ec1147ca",
	            "SandboxKey": "/var/run/docker/netns/7a54c8801a15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34245"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34246"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34249"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34247"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34248"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-142606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:e9:58:62:05:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "234aad51cbf5e49e54b2e21134f415ba87de220494e4f6151e070cebaa7dbe13",
	                    "EndpointID": "e6041f18e5257b5e5386a97f0516adb8883cac32326cd3c81566d2a8f70b1315",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-142606",
	                        "bf001fc7b739"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
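(Reference note, not part of the captured output: the host-port mappings recorded in the inspect JSON above, e.g. 22/tcp published on 127.0.0.1:34245 and 8443/tcp on 127.0.0.1:34248, are the same values the harness later reads back with a Go template in the "Last Start" log below. Assuming the profile name addons-142606 from this run and a still-running container, a minimal query of the SSH port would be:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-142606

which for this container should print 34245.)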
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-142606 -n addons-142606
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-142606 logs -n 25: (1.441231847s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-971616 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-971616   │ jenkins │ v1.37.0 │ 16 Dec 25 06:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-971616                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-971616   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ -o=json --download-only -p download-only-783122 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-783122   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-783122                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-783122   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ -o=json --download-only -p download-only-352125 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-352125   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-352125                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-352125   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-971616                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-971616   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-783122                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-783122   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-352125                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-352125   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ --download-only -p download-docker-840918 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-840918 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ delete  │ -p download-docker-840918                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-840918 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ --download-only -p binary-mirror-707915 --alsologtostderr --binary-mirror http://127.0.0.1:38553 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-707915   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ delete  │ -p binary-mirror-707915                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-707915   │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ addons  │ enable dashboard -p addons-142606                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-142606                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ start   │ -p addons-142606 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:16 UTC │
	│ addons  │ addons-142606 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ addons-142606 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-142606 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-142606          │ jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:13:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:13:15.957789 1600247 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:13:15.957948 1600247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:15.958081 1600247 out.go:374] Setting ErrFile to fd 2...
	I1216 06:13:15.958092 1600247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:15.958409 1600247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:13:15.958928 1600247 out.go:368] Setting JSON to false
	I1216 06:13:15.959774 1600247 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32147,"bootTime":1765833449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:13:15.959850 1600247 start.go:143] virtualization:  
	I1216 06:13:15.963591 1600247 out.go:179] * [addons-142606] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:13:15.967676 1600247 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:13:15.967838 1600247 notify.go:221] Checking for updates...
	I1216 06:13:15.974456 1600247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:13:15.977588 1600247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:13:15.980782 1600247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:13:15.983903 1600247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:13:15.987010 1600247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:13:15.990282 1600247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:13:16.032354 1600247 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:13:16.032520 1600247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:16.088857 1600247 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 06:13:16.079088735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:16.088962 1600247 docker.go:319] overlay module found
	I1216 06:13:16.092105 1600247 out.go:179] * Using the docker driver based on user configuration
	I1216 06:13:16.095059 1600247 start.go:309] selected driver: docker
	I1216 06:13:16.095089 1600247 start.go:927] validating driver "docker" against <nil>
	I1216 06:13:16.095103 1600247 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:13:16.095864 1600247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:16.154674 1600247 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 06:13:16.145692237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:16.154829 1600247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:13:16.155047 1600247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:13:16.157997 1600247 out.go:179] * Using Docker driver with root privileges
	I1216 06:13:16.160801 1600247 cni.go:84] Creating CNI manager for ""
	I1216 06:13:16.160873 1600247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:13:16.160886 1600247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:13:16.160962 1600247 start.go:353] cluster config:
	{Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1216 06:13:16.164155 1600247 out.go:179] * Starting "addons-142606" primary control-plane node in "addons-142606" cluster
	I1216 06:13:16.166915 1600247 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:13:16.169822 1600247 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:13:16.172567 1600247 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:16.172620 1600247 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 06:13:16.172640 1600247 cache.go:65] Caching tarball of preloaded images
	I1216 06:13:16.172665 1600247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:13:16.172732 1600247 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:13:16.172743 1600247 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 06:13:16.173094 1600247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/config.json ...
	I1216 06:13:16.173125 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/config.json: {Name:mkdf2c59ee60ef020b4de8eb68942a1833c1c127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:16.189582 1600247 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 06:13:16.189712 1600247 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 06:13:16.189738 1600247 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 06:13:16.189744 1600247 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 06:13:16.189751 1600247 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 06:13:16.189760 1600247 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1216 06:13:34.624340 1600247 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1216 06:13:34.624401 1600247 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:13:34.624433 1600247 start.go:360] acquireMachinesLock for addons-142606: {Name:mk5d421a8bc03800bd0474a647fe31f4b3011418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:13:34.624603 1600247 start.go:364] duration metric: took 145.052µs to acquireMachinesLock for "addons-142606"
	I1216 06:13:34.624644 1600247 start.go:93] Provisioning new machine with config: &{Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:13:34.624720 1600247 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:13:34.628195 1600247 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 06:13:34.628455 1600247 start.go:159] libmachine.API.Create for "addons-142606" (driver="docker")
	I1216 06:13:34.628509 1600247 client.go:173] LocalClient.Create starting
	I1216 06:13:34.628631 1600247 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem
	I1216 06:13:35.028113 1600247 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem
	I1216 06:13:35.123794 1600247 cli_runner.go:164] Run: docker network inspect addons-142606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:13:35.140101 1600247 cli_runner.go:211] docker network inspect addons-142606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:13:35.140202 1600247 network_create.go:284] running [docker network inspect addons-142606] to gather additional debugging logs...
	I1216 06:13:35.140225 1600247 cli_runner.go:164] Run: docker network inspect addons-142606
	W1216 06:13:35.155094 1600247 cli_runner.go:211] docker network inspect addons-142606 returned with exit code 1
	I1216 06:13:35.155129 1600247 network_create.go:287] error running [docker network inspect addons-142606]: docker network inspect addons-142606: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-142606 not found
	I1216 06:13:35.155143 1600247 network_create.go:289] output of [docker network inspect addons-142606]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-142606 not found
	
	** /stderr **
	I1216 06:13:35.155251 1600247 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:13:35.172235 1600247 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b14410}
	I1216 06:13:35.172278 1600247 network_create.go:124] attempt to create docker network addons-142606 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 06:13:35.172343 1600247 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-142606 addons-142606
	I1216 06:13:35.237080 1600247 network_create.go:108] docker network addons-142606 192.168.49.0/24 created
	I1216 06:13:35.237114 1600247 kic.go:121] calculated static IP "192.168.49.2" for the "addons-142606" container
	I1216 06:13:35.237193 1600247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:13:35.256202 1600247 cli_runner.go:164] Run: docker volume create addons-142606 --label name.minikube.sigs.k8s.io=addons-142606 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:13:35.275952 1600247 oci.go:103] Successfully created a docker volume addons-142606
	I1216 06:13:35.276064 1600247 cli_runner.go:164] Run: docker run --rm --name addons-142606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142606 --entrypoint /usr/bin/test -v addons-142606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:13:36.942675 1600247 cli_runner.go:217] Completed: docker run --rm --name addons-142606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142606 --entrypoint /usr/bin/test -v addons-142606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.666570189s)
	I1216 06:13:36.942722 1600247 oci.go:107] Successfully prepared a docker volume addons-142606
	I1216 06:13:36.942764 1600247 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:36.942777 1600247 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:13:36.942840 1600247 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-142606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 06:13:40.927804 1600247 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-142606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.984922902s)
	I1216 06:13:40.927838 1600247 kic.go:203] duration metric: took 3.985057967s to extract preloaded images to volume ...
	W1216 06:13:40.927996 1600247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 06:13:40.928118 1600247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:13:40.985038 1600247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-142606 --name addons-142606 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142606 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-142606 --network addons-142606 --ip 192.168.49.2 --volume addons-142606:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:13:41.297519 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Running}}
	I1216 06:13:41.322866 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:13:41.352156 1600247 cli_runner.go:164] Run: docker exec addons-142606 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:13:41.410114 1600247 oci.go:144] the created container "addons-142606" has a running status.
	I1216 06:13:41.410140 1600247 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa...
	I1216 06:13:41.694055 1600247 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:13:41.715336 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:13:41.744187 1600247 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:13:41.744208 1600247 kic_runner.go:114] Args: [docker exec --privileged addons-142606 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:13:41.811615 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:13:41.837667 1600247 machine.go:94] provisionDockerMachine start ...
	I1216 06:13:41.837769 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:41.859406 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:41.859744 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:41.859753 1600247 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:13:41.860383 1600247 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 06:13:44.991981 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-142606
	
	I1216 06:13:44.992006 1600247 ubuntu.go:182] provisioning hostname "addons-142606"
	I1216 06:13:44.992092 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.038191 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:45.038533 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:45.038545 1600247 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-142606 && echo "addons-142606" | sudo tee /etc/hostname
	I1216 06:13:45.248125 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-142606
	
	I1216 06:13:45.248306 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.278587 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:45.278931 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:45.278957 1600247 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-142606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-142606/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-142606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:13:45.420777 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:13:45.420807 1600247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:13:45.420834 1600247 ubuntu.go:190] setting up certificates
	I1216 06:13:45.420851 1600247 provision.go:84] configureAuth start
	I1216 06:13:45.420924 1600247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142606
	I1216 06:13:45.437479 1600247 provision.go:143] copyHostCerts
	I1216 06:13:45.437564 1600247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:13:45.437704 1600247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:13:45.437780 1600247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:13:45.437848 1600247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.addons-142606 san=[127.0.0.1 192.168.49.2 addons-142606 localhost minikube]
	I1216 06:13:45.597072 1600247 provision.go:177] copyRemoteCerts
	I1216 06:13:45.597146 1600247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:13:45.597191 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.614392 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:45.708520 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:13:45.727452 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 06:13:45.744959 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:13:45.762424 1600247 provision.go:87] duration metric: took 341.544865ms to configureAuth
	I1216 06:13:45.762455 1600247 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:13:45.762648 1600247 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:13:45.762755 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:45.780261 1600247 main.go:143] libmachine: Using SSH client type: native
	I1216 06:13:45.780651 1600247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1216 06:13:45.780676 1600247 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:13:46.055816 1600247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:13:46.055837 1600247 machine.go:97] duration metric: took 4.218150695s to provisionDockerMachine
	I1216 06:13:46.055849 1600247 client.go:176] duration metric: took 11.427326544s to LocalClient.Create
	I1216 06:13:46.055863 1600247 start.go:167] duration metric: took 11.427410106s to libmachine.API.Create "addons-142606"
	I1216 06:13:46.055870 1600247 start.go:293] postStartSetup for "addons-142606" (driver="docker")
	I1216 06:13:46.055891 1600247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:13:46.056310 1600247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:13:46.056375 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.077412 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.176974 1600247 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:13:46.180563 1600247 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:13:46.180594 1600247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:13:46.180607 1600247 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:13:46.180679 1600247 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:13:46.180708 1600247 start.go:296] duration metric: took 124.832333ms for postStartSetup
	I1216 06:13:46.181033 1600247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142606
	I1216 06:13:46.198232 1600247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/config.json ...
	I1216 06:13:46.198525 1600247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:13:46.198580 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.215880 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.309542 1600247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:13:46.314242 1600247 start.go:128] duration metric: took 11.689505476s to createHost
	I1216 06:13:46.314267 1600247 start.go:83] releasing machines lock for "addons-142606", held for 11.689648559s
	I1216 06:13:46.314336 1600247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142606
	I1216 06:13:46.332111 1600247 ssh_runner.go:195] Run: cat /version.json
	I1216 06:13:46.332134 1600247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:13:46.332167 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.332201 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:13:46.357911 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.358066 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:13:46.453003 1600247 ssh_runner.go:195] Run: systemctl --version
	I1216 06:13:46.542827 1600247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:13:46.582376 1600247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:13:46.586750 1600247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:13:46.586827 1600247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:13:46.615874 1600247 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 06:13:46.615946 1600247 start.go:496] detecting cgroup driver to use...
	I1216 06:13:46.615993 1600247 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:13:46.616069 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:13:46.633768 1600247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:13:46.646297 1600247 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:13:46.646359 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:13:46.664082 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:13:46.684314 1600247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:13:46.809574 1600247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:13:46.935484 1600247 docker.go:234] disabling docker service ...
	I1216 06:13:46.935553 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:13:46.956253 1600247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:13:46.969621 1600247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:13:47.096447 1600247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:13:47.216231 1600247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:13:47.229666 1600247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:13:47.243767 1600247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:13:47.243887 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.253289 1600247 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:13:47.253390 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.262288 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.270876 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.279512 1600247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:13:47.287824 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.297134 1600247 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:13:47.310460 1600247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
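The sed edits above rewrite the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf. A minimal spot-check on the node, with the expected values inferred from those commands rather than from a captured file:

    # Inspect the keys minikube just rewrote in the cri-o drop-in.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected (approximately):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)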
	I1216 06:13:47.319536 1600247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:13:47.327407 1600247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:13:47.334992 1600247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:13:47.454748 1600247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:13:47.639050 1600247 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:13:47.639153 1600247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:13:47.643005 1600247 start.go:564] Will wait 60s for crictl version
	I1216 06:13:47.643074 1600247 ssh_runner.go:195] Run: which crictl
	I1216 06:13:47.646818 1600247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:13:47.672433 1600247 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:13:47.672592 1600247 ssh_runner.go:195] Run: crio --version
	I1216 06:13:47.701586 1600247 ssh_runner.go:195] Run: crio --version
	I1216 06:13:47.733744 1600247 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 06:13:47.736558 1600247 cli_runner.go:164] Run: docker network inspect addons-142606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:13:47.753055 1600247 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:13:47.756926 1600247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:13:47.767108 1600247 kubeadm.go:884] updating cluster {Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:13:47.767234 1600247 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:47.767297 1600247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:13:47.804154 1600247 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:13:47.804180 1600247 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:13:47.804239 1600247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:13:47.829727 1600247 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:13:47.829750 1600247 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:13:47.829758 1600247 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 06:13:47.829847 1600247 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-142606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
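The kubelet unit fragment above is split between /lib/systemd/system/kubelet.service and the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf that are written a few lines further down; a sketch for viewing the merged unit on the node (for example via minikube ssh):

    sudo systemctl cat kubelet          # base unit plus the 10-kubeadm.conf drop-in with the ExecStart override
    sudo systemctl daemon-reload && sudo systemctl restart kubelet   # after editing either file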
	I1216 06:13:47.829940 1600247 ssh_runner.go:195] Run: crio config
	I1216 06:13:47.891767 1600247 cni.go:84] Creating CNI manager for ""
	I1216 06:13:47.891842 1600247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:13:47.891881 1600247 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:13:47.891938 1600247 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-142606 NodeName:addons-142606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:13:47.892108 1600247 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-142606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:13:47.892231 1600247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:13:47.900130 1600247 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:13:47.900204 1600247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:13:47.907870 1600247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 06:13:47.923622 1600247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:13:47.937631 1600247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
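Once kubeadm init (run further below) consumes /var/tmp/minikube/kubeadm.yaml, the cluster-side copies of this configuration can be read back; a sketch assuming the ConfigMap names that appear later in this log (kubeadm-config and kubelet-config in kube-system):

    kubectl -n kube-system get configmap kubeadm-config -o yaml    # stored ClusterConfiguration
    kubectl -n kube-system get configmap kubelet-config -o yaml    # stored KubeletConfiguration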
	I1216 06:13:47.950980 1600247 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:13:47.954774 1600247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:13:47.964685 1600247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:13:48.089451 1600247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:13:48.106378 1600247 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606 for IP: 192.168.49.2
	I1216 06:13:48.106398 1600247 certs.go:195] generating shared ca certs ...
	I1216 06:13:48.106415 1600247 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.106544 1600247 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:13:48.641897 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt ...
	I1216 06:13:48.641932 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt: {Name:mkf46262e02ea2028a456580d90b50f2340dbb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.642129 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key ...
	I1216 06:13:48.642142 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key: {Name:mkc8a5e2655ac158b6734542ce846c672953403b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.642228 1600247 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:13:48.823105 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt ...
	I1216 06:13:48.823137 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt: {Name:mkdd726e6e1143a3b07e9bd935c2a97714506c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.823303 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key ...
	I1216 06:13:48.823322 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key: {Name:mk0df7b0c6d0f510dade0ec4ce39add2134f0c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.823410 1600247 certs.go:257] generating profile certs ...
	I1216 06:13:48.823468 1600247 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.key
	I1216 06:13:48.823485 1600247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt with IP's: []
	I1216 06:13:48.862234 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt ...
	I1216 06:13:48.862276 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: {Name:mk89b5e5cac16d069a5128404c05bea70625da4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.862443 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.key ...
	I1216 06:13:48.862457 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.key: {Name:mk074b38d08c2a224c17463efdb2bafa16ad65a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:48.862542 1600247 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6
	I1216 06:13:48.862560 1600247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 06:13:49.223578 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6 ...
	I1216 06:13:49.223610 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6: {Name:mk891d97de47c0a7b810a8597cbbf7ed57b5d12a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.223794 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6 ...
	I1216 06:13:49.223809 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6: {Name:mkbaf2da6fd5895ff4a1607b98115c6179c9bc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.223893 1600247 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt.4c3e25f6 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt
	I1216 06:13:49.223974 1600247 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key.4c3e25f6 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key
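A quick way to confirm the SANs just baked into the apiserver certificate (IPs taken from the generation step above; the path is the profile directory used throughout this log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
    # Should list 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2.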
	I1216 06:13:49.224032 1600247 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key
	I1216 06:13:49.224054 1600247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt with IP's: []
	I1216 06:13:49.515298 1600247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt ...
	I1216 06:13:49.515334 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt: {Name:mk9b7df4149406e4d3144a3c55b374da4eaa475f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.515506 1600247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key ...
	I1216 06:13:49.515524 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key: {Name:mk33c984c867d87faf2e534025dadf476be4340e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:49.515703 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:13:49.515749 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:13:49.515780 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:13:49.515815 1600247 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:13:49.516396 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:13:49.535271 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:13:49.553393 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:13:49.573721 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:13:49.591294 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:13:49.609259 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:13:49.626500 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:13:49.644089 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:13:49.662625 1600247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:13:49.680051 1600247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:13:49.692551 1600247 ssh_runner.go:195] Run: openssl version
	I1216 06:13:49.698980 1600247 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.706869 1600247 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:13:49.714910 1600247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.719790 1600247 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.719856 1600247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:13:49.762272 1600247 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:13:49.769716 1600247 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
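The b5213941.0 name above follows OpenSSL's hashed-directory convention: the link name is the CA certificate's subject hash plus a .0 suffix, which is how TLS clients locate the CA under /etc/ssl/certs. A sketch that reproduces the hash:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence /etc/ssl/certs/b5213941.0 -> minikubeCA.pem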
	I1216 06:13:49.777000 1600247 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:13:49.780433 1600247 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:13:49.780557 1600247 kubeadm.go:401] StartCluster: {Name:addons-142606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-142606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:13:49.780638 1600247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:13:49.780700 1600247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:13:49.807769 1600247 cri.go:89] found id: ""
	I1216 06:13:49.807864 1600247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:13:49.816131 1600247 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:13:49.824399 1600247 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:13:49.824486 1600247 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:13:49.832619 1600247 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:13:49.832638 1600247 kubeadm.go:158] found existing configuration files:
	
	I1216 06:13:49.832714 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:13:49.840939 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:13:49.841006 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:13:49.848609 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:13:49.856583 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:13:49.856680 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:13:49.864291 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:13:49.872192 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:13:49.872290 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:13:49.879912 1600247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:13:49.887557 1600247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:13:49.887674 1600247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:13:49.895293 1600247 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:13:49.934678 1600247 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:13:49.935046 1600247 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:49.956488 1600247 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:49.956561 1600247 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:13:49.956596 1600247 kubeadm.go:319] OS: Linux
	I1216 06:13:49.956644 1600247 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:49.956697 1600247 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:49.956747 1600247 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:49.956797 1600247 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:49.956846 1600247 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:49.956895 1600247 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:49.956942 1600247 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:49.956991 1600247 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:49.957038 1600247 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:50.027355 1600247 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:50.027469 1600247 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:50.027561 1600247 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:50.039891 1600247 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:50.046635 1600247 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:50.046807 1600247 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:50.046925 1600247 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:50.553441 1600247 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:13:51.593573 1600247 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:13:52.168632 1600247 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:13:52.572038 1600247 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:13:52.823274 1600247 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:13:52.823646 1600247 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-142606 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 06:13:53.693194 1600247 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:13:53.693532 1600247 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-142606 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 06:13:53.911743 1600247 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:13:54.464917 1600247 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:13:55.050299 1600247 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:13:55.050584 1600247 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:55.684882 1600247 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:56.120383 1600247 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:56.602480 1600247 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:56.723968 1600247 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:58.113130 1600247 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:58.114016 1600247 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:58.116868 1600247 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:58.120366 1600247 out.go:252]   - Booting up control plane ...
	I1216 06:13:58.120496 1600247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:58.120578 1600247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:58.120647 1600247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:58.137090 1600247 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:58.137439 1600247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:58.146799 1600247 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:58.146906 1600247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:58.146946 1600247 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:58.280630 1600247 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:58.280745 1600247 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:59.281274 1600247 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000916313s
	I1216 06:13:59.285063 1600247 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:13:59.285159 1600247 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 06:13:59.285248 1600247 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:13:59.285326 1600247 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:14:01.964908 1600247 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.679192202s
	I1216 06:14:04.877378 1600247 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.592266636s
	I1216 06:14:05.287209 1600247 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001908453s
	I1216 06:14:05.320537 1600247 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:14:05.341173 1600247 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:14:05.365290 1600247 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:14:05.365729 1600247 kubeadm.go:319] [mark-control-plane] Marking the node addons-142606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:14:05.379530 1600247 kubeadm.go:319] [bootstrap-token] Using token: zj5b5t.39n0uh0y5cilprjm
	I1216 06:14:05.384932 1600247 out.go:252]   - Configuring RBAC rules ...
	I1216 06:14:05.385080 1600247 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:14:05.387990 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:14:05.398678 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:14:05.403199 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:14:05.408880 1600247 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:14:05.413243 1600247 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:14:05.695428 1600247 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:14:06.124144 1600247 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:14:06.694779 1600247 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:14:06.696264 1600247 kubeadm.go:319] 
	I1216 06:14:06.696339 1600247 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:14:06.696345 1600247 kubeadm.go:319] 
	I1216 06:14:06.696422 1600247 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:14:06.696427 1600247 kubeadm.go:319] 
	I1216 06:14:06.696452 1600247 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:14:06.696561 1600247 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:14:06.696614 1600247 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:14:06.696619 1600247 kubeadm.go:319] 
	I1216 06:14:06.696673 1600247 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:14:06.696676 1600247 kubeadm.go:319] 
	I1216 06:14:06.696724 1600247 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:14:06.696728 1600247 kubeadm.go:319] 
	I1216 06:14:06.696780 1600247 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:14:06.696855 1600247 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:14:06.696924 1600247 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:14:06.696930 1600247 kubeadm.go:319] 
	I1216 06:14:06.697014 1600247 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:14:06.697091 1600247 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:14:06.697095 1600247 kubeadm.go:319] 
	I1216 06:14:06.697186 1600247 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zj5b5t.39n0uh0y5cilprjm \
	I1216 06:14:06.697291 1600247 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b5016a2f19357bbe076308b3bd53072319152b21d9550fc4ffc6d799a06c05 \
	I1216 06:14:06.697311 1600247 kubeadm.go:319] 	--control-plane 
	I1216 06:14:06.697315 1600247 kubeadm.go:319] 
	I1216 06:14:06.697399 1600247 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:14:06.697403 1600247 kubeadm.go:319] 
	I1216 06:14:06.697485 1600247 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zj5b5t.39n0uh0y5cilprjm \
	I1216 06:14:06.697587 1600247 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b5016a2f19357bbe076308b3bd53072319152b21d9550fc4ffc6d799a06c05 
	I1216 06:14:06.700301 1600247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:14:06.700578 1600247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:14:06.700684 1600247 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
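The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA. A sketch using the standard kubeadm recipe, with the certificate directory this run configured (/var/lib/minikube/certs) and assuming an RSA CA key:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should match the sha256:... value printed in the kubeadm init output above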
	I1216 06:14:06.700734 1600247 cni.go:84] Creating CNI manager for ""
	I1216 06:14:06.700748 1600247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:14:06.703777 1600247 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 06:14:06.706713 1600247 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 06:14:06.710862 1600247 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 06:14:06.710884 1600247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 06:14:06.724277 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 06:14:07.030599 1600247 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:14:07.030732 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:07.030830 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-142606 minikube.k8s.io/updated_at=2025_12_16T06_14_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=addons-142606 minikube.k8s.io/primary=true
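The label command above stamps the node with minikube metadata; a sketch for verifying it from the host, assuming kubectl is pointed at this profile's kubeconfig:

    kubectl get node addons-142606 --show-labels | tr ',' '\n' | grep minikube.k8s.io
    # expect minikube.k8s.io/name=addons-142606, .../primary=true, .../version=v1.37.0, ...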
	I1216 06:14:07.169460 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:07.169526 1600247 ops.go:34] apiserver oom_adj: -16
	I1216 06:14:07.670572 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:08.170524 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:08.670340 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:09.169589 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:09.670377 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:10.169579 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:10.670020 1600247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:10.761367 1600247 kubeadm.go:1114] duration metric: took 3.730678413s to wait for elevateKubeSystemPrivileges
	I1216 06:14:10.761396 1600247 kubeadm.go:403] duration metric: took 20.980843269s to StartCluster
	I1216 06:14:10.761414 1600247 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:10.761548 1600247 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:14:10.761949 1600247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:10.762137 1600247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:14:10.762162 1600247 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:14:10.762395 1600247 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:14:10.762435 1600247 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
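The toEnable map above drives which addons this run turns on; their resulting state can be listed with the same test binary, e.g.:

    out/minikube-linux-arm64 -p addons-142606 addons list
    # shows each addon (registry, ingress, metrics-server, ...) as enabled or disabled for this profile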
	I1216 06:14:10.762527 1600247 addons.go:70] Setting yakd=true in profile "addons-142606"
	I1216 06:14:10.762547 1600247 addons.go:239] Setting addon yakd=true in "addons-142606"
	I1216 06:14:10.762570 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.763040 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.763186 1600247 addons.go:70] Setting inspektor-gadget=true in profile "addons-142606"
	I1216 06:14:10.763202 1600247 addons.go:239] Setting addon inspektor-gadget=true in "addons-142606"
	I1216 06:14:10.763220 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.763609 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.763915 1600247 addons.go:70] Setting metrics-server=true in profile "addons-142606"
	I1216 06:14:10.763934 1600247 addons.go:239] Setting addon metrics-server=true in "addons-142606"
	I1216 06:14:10.763957 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.764370 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.768301 1600247 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-142606"
	I1216 06:14:10.768374 1600247 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-142606"
	I1216 06:14:10.768486 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.768802 1600247 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-142606"
	I1216 06:14:10.768818 1600247 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-142606"
	I1216 06:14:10.768859 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.769419 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.770026 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.770619 1600247 addons.go:70] Setting cloud-spanner=true in profile "addons-142606"
	I1216 06:14:10.770642 1600247 addons.go:239] Setting addon cloud-spanner=true in "addons-142606"
	I1216 06:14:10.770670 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.771099 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.775657 1600247 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-142606"
	I1216 06:14:10.775731 1600247 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-142606"
	I1216 06:14:10.775760 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.776244 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.783877 1600247 addons.go:70] Setting default-storageclass=true in profile "addons-142606"
	I1216 06:14:10.783921 1600247 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-142606"
	I1216 06:14:10.783970 1600247 addons.go:70] Setting registry=true in profile "addons-142606"
	I1216 06:14:10.784033 1600247 addons.go:239] Setting addon registry=true in "addons-142606"
	I1216 06:14:10.784192 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.784279 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.785928 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.800378 1600247 addons.go:70] Setting gcp-auth=true in profile "addons-142606"
	I1216 06:14:10.800422 1600247 mustload.go:66] Loading cluster: addons-142606
	I1216 06:14:10.800647 1600247 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:14:10.800923 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.810307 1600247 addons.go:70] Setting registry-creds=true in profile "addons-142606"
	I1216 06:14:10.810335 1600247 addons.go:239] Setting addon registry-creds=true in "addons-142606"
	I1216 06:14:10.810371 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.810853 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.820107 1600247 addons.go:70] Setting ingress=true in profile "addons-142606"
	I1216 06:14:10.820153 1600247 addons.go:239] Setting addon ingress=true in "addons-142606"
	I1216 06:14:10.820202 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.820866 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.829935 1600247 addons.go:70] Setting storage-provisioner=true in profile "addons-142606"
	I1216 06:14:10.830092 1600247 addons.go:239] Setting addon storage-provisioner=true in "addons-142606"
	I1216 06:14:10.830151 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.831530 1600247 addons.go:70] Setting ingress-dns=true in profile "addons-142606"
	I1216 06:14:10.831559 1600247 addons.go:239] Setting addon ingress-dns=true in "addons-142606"
	I1216 06:14:10.831595 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.832040 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.836134 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.847774 1600247 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-142606"
	I1216 06:14:10.847853 1600247 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-142606"
	I1216 06:14:10.848045 1600247 out.go:179] * Verifying Kubernetes components...
	I1216 06:14:10.848323 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.851278 1600247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:10.872675 1600247 addons.go:70] Setting volcano=true in profile "addons-142606"
	I1216 06:14:10.872752 1600247 addons.go:239] Setting addon volcano=true in "addons-142606"
	I1216 06:14:10.872805 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.873318 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.892690 1600247 addons.go:70] Setting volumesnapshots=true in profile "addons-142606"
	I1216 06:14:10.892765 1600247 addons.go:239] Setting addon volumesnapshots=true in "addons-142606"
	I1216 06:14:10.892820 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:10.893331 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:10.913344 1600247 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 06:14:10.921288 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 06:14:10.921317 1600247 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 06:14:10.921391 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:10.943796 1600247 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 06:14:10.945025 1600247 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 06:14:10.981097 1600247 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 06:14:11.033906 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 06:14:11.034087 1600247 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 06:14:11.002319 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 06:14:11.041280 1600247 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 06:14:11.041383 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.041602 1600247 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 06:14:11.041614 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 06:14:11.041667 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.054919 1600247 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 06:14:11.055000 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 06:14:11.055131 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.003913 1600247 addons.go:239] Setting addon default-storageclass=true in "addons-142606"
	I1216 06:14:11.059316 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:11.059843 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:11.081450 1600247 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 06:14:11.085776 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 06:14:11.085863 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 06:14:11.085965 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.003978 1600247 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 06:14:11.102938 1600247 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 06:14:11.102964 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 06:14:11.103050 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.103458 1600247 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-142606"
	I1216 06:14:11.103498 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:11.103944 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:11.112014 1600247 host.go:66] Checking if "addons-142606" exists ...
	W1216 06:14:11.114437 1600247 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 06:14:11.114791 1600247 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 06:14:11.114804 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 06:14:11.114859 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.122635 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 06:14:11.122799 1600247 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 06:14:11.122855 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 06:14:11.134137 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 06:14:11.137030 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 06:14:11.143606 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 06:14:11.146664 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 06:14:11.149667 1600247 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 06:14:11.149695 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 06:14:11.149769 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.123518 1600247 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:14:11.123524 1600247 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 06:14:11.130550 1600247 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 06:14:11.171485 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 06:14:11.171575 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.183864 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 06:14:11.184139 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.184983 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 06:14:11.185554 1600247 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:14:11.185577 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:14:11.185640 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.209415 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 06:14:11.209528 1600247 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 06:14:11.209705 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.229773 1600247 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 06:14:11.236610 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 06:14:11.238546 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 06:14:11.238568 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 06:14:11.238633 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.246052 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.247346 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 06:14:11.252602 1600247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 06:14:11.257483 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 06:14:11.257526 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 06:14:11.257607 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.292759 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.323640 1600247 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:14:11.323661 1600247 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:14:11.323722 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.356873 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.360613 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.361666 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.381845 1600247 out.go:179]   - Using image docker.io/busybox:stable
	I1216 06:14:11.382015 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.391966 1600247 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 06:14:11.392628 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.400404 1600247 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 06:14:11.400429 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 06:14:11.400613 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:11.418750 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.445398 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.468520 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.469282 1600247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:14:11.469581 1600247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:14:11.488774 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.488818 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.491674 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:11.498995 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	W1216 06:14:11.500228 1600247 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 06:14:11.500258 1600247 retry.go:31] will retry after 339.752207ms: ssh: handshake failed: EOF
	I1216 06:14:11.717729 1600247 node_ready.go:35] waiting up to 6m0s for node "addons-142606" to be "Ready" ...
	I1216 06:14:11.721299 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 06:14:11.721374 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 06:14:11.913775 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 06:14:11.913857 1600247 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 06:14:11.971101 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 06:14:11.971124 1600247 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 06:14:11.972260 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 06:14:12.086634 1600247 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 06:14:12.086657 1600247 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 06:14:12.184589 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 06:14:12.186852 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 06:14:12.186916 1600247 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 06:14:12.202969 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 06:14:12.260848 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 06:14:12.270739 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:14:12.292636 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 06:14:12.293397 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 06:14:12.329882 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 06:14:12.329908 1600247 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 06:14:12.341706 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 06:14:12.380841 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 06:14:12.390499 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 06:14:12.390521 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 06:14:12.402160 1600247 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 06:14:12.402185 1600247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 06:14:12.418574 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 06:14:12.450876 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 06:14:12.450903 1600247 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 06:14:12.512900 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 06:14:12.512927 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 06:14:12.630953 1600247 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 06:14:12.630978 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 06:14:12.666324 1600247 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 06:14:12.666350 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 06:14:12.667346 1600247 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 06:14:12.667371 1600247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 06:14:12.684817 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:14:12.695073 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 06:14:12.695142 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 06:14:12.738986 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 06:14:12.939944 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 06:14:12.969488 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 06:14:12.969510 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 06:14:12.994138 1600247 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 06:14:12.994160 1600247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 06:14:13.124038 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 06:14:13.124107 1600247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 06:14:13.163228 1600247 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 06:14:13.163296 1600247 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 06:14:13.361572 1600247 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 06:14:13.361641 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 06:14:13.400285 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 06:14:13.400354 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 06:14:13.585335 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 06:14:13.585411 1600247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 06:14:13.588014 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1216 06:14:13.753297 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:13.773467 1600247 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.303859159s)
	I1216 06:14:13.773547 1600247 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1216 06:14:13.773671 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.801386597s)
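The pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the Docker network gateway (192.168.49.1) from inside the cluster. A minimal way to confirm the injected record, assuming the default kube-system/coredns ConfigMap layout, would be:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # expected fragment:
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }
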
	I1216 06:14:13.838945 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 06:14:13.839016 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 06:14:14.159182 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 06:14:14.159255 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 06:14:14.278097 1600247 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-142606" context rescaled to 1 replicas
	I1216 06:14:14.463904 1600247 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 06:14:14.463925 1600247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 06:14:14.605637 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1216 06:14:15.761631 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:16.433351 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.230292065s)
	I1216 06:14:16.433449 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.172577488s)
	I1216 06:14:16.433506 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.162743054s)
	I1216 06:14:16.433549 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.140132969s)
	I1216 06:14:16.433772 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.141111573s)
	I1216 06:14:16.433820 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.09209193s)
	I1216 06:14:16.433855 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.05299121s)
	I1216 06:14:16.433976 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.2492991s)
	I1216 06:14:16.434004 1600247 addons.go:495] Verifying addon metrics-server=true in "addons-142606"
	I1216 06:14:17.222321 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.803694511s)
	I1216 06:14:17.222663 1600247 addons.go:495] Verifying addon ingress=true in "addons-142606"
	I1216 06:14:17.222431 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.537538911s)
	I1216 06:14:17.222465 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.483408831s)
	I1216 06:14:17.222488 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.282475719s)
	I1216 06:14:17.223237 1600247 addons.go:495] Verifying addon registry=true in "addons-142606"
	I1216 06:14:17.222556 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.634476144s)
	W1216 06:14:17.223455 1600247 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 06:14:17.223471 1600247 retry.go:31] will retry after 268.785662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
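The failure above is a CRD-establishment race: the VolumeSnapshotClass object (csi-hostpath-snapclass) is applied in the same kubectl invocation as the CRDs that define its kind, so the first apply fails with "no matches for kind" and the addon logic retries (later with --force, which succeeds once the CRDs are registered). A minimal sketch of avoiding the race by waiting for the CRD first, using the CRD name shown in the log:

    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
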
	I1216 06:14:17.226426 1600247 out.go:179] * Verifying registry addon...
	I1216 06:14:17.226433 1600247 out.go:179] * Verifying ingress addon...
	I1216 06:14:17.226591 1600247 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-142606 service yakd-dashboard -n yakd-dashboard
	
	I1216 06:14:17.231010 1600247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 06:14:17.231010 1600247 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 06:14:17.238244 1600247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 06:14:17.238415 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:17.238560 1600247 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 06:14:17.238590 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:17.493258 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 06:14:17.546926 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.941231546s)
	I1216 06:14:17.546966 1600247 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-142606"
	I1216 06:14:17.549944 1600247 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 06:14:17.554364 1600247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 06:14:17.567454 1600247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 06:14:17.567522 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:17.737282 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:17.737605 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:18.058570 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:18.220810 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:18.235182 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:18.235497 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:18.558829 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:18.722808 1600247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 06:14:18.722964 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:18.735012 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:18.735087 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:18.745764 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:18.873604 1600247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 06:14:18.886315 1600247 addons.go:239] Setting addon gcp-auth=true in "addons-142606"
	I1216 06:14:18.886363 1600247 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:14:18.886829 1600247 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:14:18.903740 1600247 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 06:14:18.903795 1600247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:14:18.920632 1600247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:14:19.057643 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:19.234858 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:19.235074 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:19.558396 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:19.734717 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:19.734891 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:20.058618 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:20.222930 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:20.236940 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:20.237182 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:20.244520 1600247 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.340744757s)
	I1216 06:14:20.244742 1600247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.751399321s)
	I1216 06:14:20.247616 1600247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 06:14:20.250390 1600247 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 06:14:20.253333 1600247 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 06:14:20.253359 1600247 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 06:14:20.266659 1600247 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 06:14:20.266722 1600247 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 06:14:20.280988 1600247 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 06:14:20.281013 1600247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 06:14:20.294426 1600247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 06:14:20.558302 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:20.736375 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:20.737627 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:20.830885 1600247 addons.go:495] Verifying addon gcp-auth=true in "addons-142606"
	I1216 06:14:20.833977 1600247 out.go:179] * Verifying gcp-auth addon...
	I1216 06:14:20.836808 1600247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 06:14:20.845517 1600247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 06:14:20.845582 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:21.058091 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:21.234447 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:21.234828 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:21.340534 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:21.558305 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:21.734523 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:21.734917 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:21.839692 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:22.057730 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:22.235290 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:22.235555 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:22.340340 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:22.557824 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:22.720850 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:22.735489 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:22.735899 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:22.839998 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:23.057939 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:23.234983 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:23.235221 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:23.339798 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:23.558228 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:23.735804 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:23.735855 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:23.840234 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:24.057483 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:24.235058 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:24.235164 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:24.340359 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:24.558004 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:24.721015 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:24.734068 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:24.734195 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:24.840066 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:25.057957 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:25.235012 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:25.235198 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:25.340045 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:25.557880 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:25.734697 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:25.734815 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:25.839616 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:26.057858 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:26.235916 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:26.236609 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:26.340506 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:26.557988 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:26.721097 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:26.734410 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:26.734512 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:26.840289 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:27.057310 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:27.234397 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:27.234517 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:27.340551 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:27.557600 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:27.734339 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:27.734451 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:27.841314 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:28.058078 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:28.234653 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:28.234940 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:28.340290 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:28.557238 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:28.734813 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:28.735243 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:28.839970 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:29.057870 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:29.220358 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:29.234110 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:29.234650 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:29.340148 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:29.557006 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:29.734282 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:29.734469 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:29.840454 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:30.073213 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:30.235513 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:30.235708 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:30.340495 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:30.557583 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:30.735183 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:30.735281 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:30.840255 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:31.057535 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:31.221431 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:31.234753 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:31.234956 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:31.339828 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:31.557757 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:31.735117 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:31.735515 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:31.840538 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:32.057409 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:32.234832 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:32.234852 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:32.340736 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:32.557438 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:32.734569 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:32.736396 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:32.840718 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:33.057713 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:33.234680 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:33.235118 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:33.339989 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:33.558524 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:33.721252 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:33.734453 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:33.735048 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:33.839959 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:34.058124 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:34.234863 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:34.235687 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:34.339314 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:34.557393 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:34.734534 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:34.734675 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:34.840750 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:35.058407 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:35.235839 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:35.236096 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:35.340062 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:35.557877 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:35.733979 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:35.734127 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:35.839930 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:36.057915 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:36.220802 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:36.242562 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:36.242795 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:36.340073 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:36.558326 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:36.734754 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:36.735065 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:36.839947 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:37.058120 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:37.235900 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:37.236265 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:37.340028 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:37.558543 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:37.734928 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:37.735293 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:37.840093 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:38.058954 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:38.234687 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:38.235316 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:38.340424 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:38.557338 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:38.721410 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:38.734308 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:38.734796 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:38.840586 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:39.057898 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:39.234910 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:39.235202 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:39.340353 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:39.557232 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:39.734303 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:39.734563 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:39.840452 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:40.057920 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:40.235528 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:40.235742 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:40.342221 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:40.557205 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:40.734854 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:40.735232 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:40.840222 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:41.057741 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:41.221587 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:41.235115 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:41.235684 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:41.340604 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:41.558020 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:41.735895 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:41.736022 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:41.840185 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:42.058135 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:42.234903 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:42.235028 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:42.339952 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:42.558594 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:42.734491 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:42.734739 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:42.840604 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:43.058029 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:43.234597 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:43.234696 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:43.340607 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:43.560287 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:43.721099 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:43.734244 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:43.734582 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:43.840450 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:44.058236 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:44.235062 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:44.235425 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:44.340380 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:44.557746 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:44.735582 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:44.736076 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:44.839822 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:45.063774 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:45.241664 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:45.243711 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:45.340053 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:45.557906 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:45.721532 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:45.734614 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:45.734736 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:45.840706 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:46.057939 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:46.233948 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:46.235141 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:46.340124 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:46.558603 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:46.734736 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:46.734935 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:46.839985 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:47.058425 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:47.234500 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:47.234630 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:47.340712 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:47.558090 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:47.721911 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:47.736778 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:47.737247 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:47.840268 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:48.057457 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:48.236170 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:48.236316 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:48.339721 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:48.558042 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:48.734656 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:48.734802 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:48.839777 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:49.057591 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:49.234611 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:49.236604 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:49.340444 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:49.557301 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:49.735002 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:49.735326 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:49.840185 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:50.057398 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 06:14:50.221122 1600247 node_ready.go:57] node "addons-142606" has "Ready":"False" status (will retry)
	I1216 06:14:50.235573 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:50.235823 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:50.340556 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:50.557165 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:50.734011 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:50.734494 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:50.840319 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:51.058145 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:51.236568 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:51.236837 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:51.340578 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:51.557774 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:51.734859 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:51.735485 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:51.840312 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:52.057961 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:52.234531 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:52.234962 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:52.339762 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:52.557937 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:52.749258 1600247 node_ready.go:49] node "addons-142606" is "Ready"
	I1216 06:14:52.749286 1600247 node_ready.go:38] duration metric: took 41.031474138s for node "addons-142606" to be "Ready" ...
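(Editor's note: the node_ready.go lines above poll the node's Ready condition until it flips to True, which took about 41s on this run. The sketch below shows that kind of check with client-go; it is not minikube's actual node_ready.go code, and the kubeconfig path, node name, poll interval, and timeout are illustrative assumptions.)

// nodeready_sketch.go: poll a node's Ready condition with client-go (illustrative only).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node until its Ready condition is True or ctx expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "addons-142606", 2*time.Second); err != nil {
		panic(err)
	}
	fmt.Println(`node "addons-142606" is "Ready"`)
}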
	I1216 06:14:52.749301 1600247 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:14:52.749359 1600247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:14:52.768591 1600247 api_server.go:72] duration metric: took 42.006401302s to wait for apiserver process to appear ...
	I1216 06:14:52.768685 1600247 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:14:52.768720 1600247 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 06:14:52.797624 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:52.799061 1600247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 06:14:52.799122 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:52.827602 1600247 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 06:14:52.843955 1600247 api_server.go:141] control plane version: v1.34.2
	I1216 06:14:52.844036 1600247 api_server.go:131] duration metric: took 75.329629ms to wait for apiserver health ...
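(Editor's note: the api_server.go lines above wait for the kube-apiserver process and then poll https://192.168.49.2:8443/healthz until it answers 200 ok. A stdlib-only sketch of such a probe follows; the address, timeout, and the InsecureSkipVerify shortcut are assumptions for illustration and not how minikube configures its client, which trusts the cluster CA.)

// healthz_probe_sketch.go: poll an apiserver /healthz endpoint until it returns 200 (illustrative only).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping cert verification keeps the sketch self-contained;
		// a production client should verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("healthz not ready within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}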
	I1216 06:14:52.844061 1600247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:14:52.865849 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:52.867010 1600247 system_pods.go:59] 19 kube-system pods found
	I1216 06:14:52.867090 1600247 system_pods.go:61] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:52.867114 1600247 system_pods.go:61] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:52.867153 1600247 system_pods.go:61] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending
	I1216 06:14:52.867178 1600247 system_pods.go:61] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:52.867200 1600247 system_pods.go:61] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:52.867239 1600247 system_pods.go:61] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:52.867266 1600247 system_pods.go:61] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:52.867288 1600247 system_pods.go:61] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:52.867327 1600247 system_pods.go:61] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending
	I1216 06:14:52.867353 1600247 system_pods.go:61] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:52.867373 1600247 system_pods.go:61] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:52.867420 1600247 system_pods.go:61] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending
	I1216 06:14:52.867442 1600247 system_pods.go:61] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending
	I1216 06:14:52.867462 1600247 system_pods.go:61] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending
	I1216 06:14:52.867502 1600247 system_pods.go:61] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending
	I1216 06:14:52.867525 1600247 system_pods.go:61] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending
	I1216 06:14:52.867549 1600247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:52.867584 1600247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending
	I1216 06:14:52.867607 1600247 system_pods.go:61] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending
	I1216 06:14:52.867631 1600247 system_pods.go:74] duration metric: took 23.549064ms to wait for pod list to return data ...
	I1216 06:14:52.867666 1600247 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:14:52.889075 1600247 default_sa.go:45] found service account: "default"
	I1216 06:14:52.889155 1600247 default_sa.go:55] duration metric: took 21.464603ms for default service account to be created ...
	I1216 06:14:52.889181 1600247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:14:52.900518 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:52.900603 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:52.900625 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:52.900645 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending
	I1216 06:14:52.900677 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:52.900700 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:52.900719 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:52.900757 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:52.900785 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:52.900812 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending
	I1216 06:14:52.900850 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:52.900874 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:52.900898 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:52.900933 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending
	I1216 06:14:52.900957 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending
	I1216 06:14:52.900976 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending
	I1216 06:14:52.901013 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending
	I1216 06:14:52.901050 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:52.901070 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending
	I1216 06:14:52.901105 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending
	I1216 06:14:52.901146 1600247 retry.go:31] will retry after 218.919549ms: missing components: kube-dns
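(Editor's note: the retry.go "will retry after ...ms: missing components: kube-dns" lines above and below reflect a wait loop that re-checks the kube-system pod list with a growing, jittered delay. The sketch below shows that general pattern in plain Go; the backoff factor, cap, jitter, and the checkComponents stub are assumptions, not minikube's retry.go implementation.)

// retry_backoff_sketch.go: retry a check with a growing, jittered delay (illustrative only).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkComponents is a stand-in for "are all required kube-system apps running?".
func checkComponents(attempt int) error {
	if attempt < 3 {
		return errors.New("missing components: kube-dns")
	}
	return nil
}

func retryWithBackoff(maxAttempts int, base time.Duration, check func(int) error) error {
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := check(attempt); err == nil {
			return nil
		} else {
			// Add up to 50% jitter, then grow the delay, capped at 10s.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			if delay *= 2; delay > 10*time.Second {
				delay = 10 * time.Second
			}
		}
	}
	return fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	if err := retryWithBackoff(8, 200*time.Millisecond, checkComponents); err != nil {
		panic(err)
	}
	fmt.Println("all required components running")
}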
	I1216 06:14:53.125650 1600247 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 06:14:53.125724 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:53.138444 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:53.138534 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:53.138557 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:53.138600 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending
	I1216 06:14:53.138626 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:53.138648 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:53.138687 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:53.138712 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:53.138737 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:53.138777 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 06:14:53.138803 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:53.138825 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:53.138862 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:53.138885 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending
	I1216 06:14:53.138913 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 06:14:53.138952 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 06:14:53.138978 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending
	I1216 06:14:53.139001 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:53.139035 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending
	I1216 06:14:53.139060 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:14:53.139094 1600247 retry.go:31] will retry after 309.512001ms: missing components: kube-dns
	I1216 06:14:53.266436 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:53.266582 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:53.359909 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:53.463214 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:53.463250 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:14:53.463258 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending
	I1216 06:14:53.463267 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 06:14:53.463272 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending
	I1216 06:14:53.463276 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:53.463281 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:53.463285 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:53.463296 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:53.463303 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 06:14:53.463315 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:53.463321 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:53.463327 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:53.463334 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 06:14:53.463344 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 06:14:53.463349 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 06:14:53.463356 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 06:14:53.463363 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending
	I1216 06:14:53.463370 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 06:14:53.463378 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:14:53.463402 1600247 retry.go:31] will retry after 459.795537ms: missing components: kube-dns
	I1216 06:14:53.601116 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:53.735582 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:53.736127 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:53.843035 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:53.944938 1600247 system_pods.go:86] 19 kube-system pods found
	I1216 06:14:53.945022 1600247 system_pods.go:89] "coredns-66bc5c9577-hzh7x" [4532cfd3-202b-404f-98e9-88793a3557e5] Running
	I1216 06:14:53.945049 1600247 system_pods.go:89] "csi-hostpath-attacher-0" [16a76d66-88e5-4dc9-9fe6-b62f1b24b2be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 06:14:53.945091 1600247 system_pods.go:89] "csi-hostpath-resizer-0" [25bae709-4f76-4545-8659-71e1ae91246f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 06:14:53.945120 1600247 system_pods.go:89] "csi-hostpathplugin-ds9r9" [d04f9e6e-41b7-4f6c-86f1-17c1472977bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 06:14:53.945144 1600247 system_pods.go:89] "etcd-addons-142606" [ba28f010-7a70-408c-b883-ac04c6332042] Running
	I1216 06:14:53.945171 1600247 system_pods.go:89] "kindnet-t8fqq" [46de864f-3035-46c4-a475-f9d951f1d51c] Running
	I1216 06:14:53.945205 1600247 system_pods.go:89] "kube-apiserver-addons-142606" [40b526a7-7c2d-46af-99b6-30b1e314240c] Running
	I1216 06:14:53.945231 1600247 system_pods.go:89] "kube-controller-manager-addons-142606" [06aa3b93-c353-45ad-9f5d-41111af2811b] Running
	I1216 06:14:53.945257 1600247 system_pods.go:89] "kube-ingress-dns-minikube" [95e2cd7e-efb8-4796-a7df-017a5f674494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 06:14:53.945281 1600247 system_pods.go:89] "kube-proxy-g5n5p" [68410934-d306-4372-b4f9-6b768fa1f3a1] Running
	I1216 06:14:53.945318 1600247 system_pods.go:89] "kube-scheduler-addons-142606" [9239a891-6dc3-4a9a-bc11-ec5c1ad9d174] Running
	I1216 06:14:53.945349 1600247 system_pods.go:89] "metrics-server-85b7d694d7-t6mbs" [8d8eeda6-d813-458b-807c-e88c9a1a0462] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 06:14:53.945376 1600247 system_pods.go:89] "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 06:14:53.945403 1600247 system_pods.go:89] "registry-6b586f9694-prj95" [839dbbf2-5df3-4b37-903a-7afbee677045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 06:14:53.945435 1600247 system_pods.go:89] "registry-creds-764b6fb674-8vxwt" [42db279c-e8af-4665-a64c-91e4804c2b00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 06:14:53.945463 1600247 system_pods.go:89] "registry-proxy-qh7wq" [bc3adfb9-13a5-45c6-9047-8a229431de86] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 06:14:53.945491 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ljkmz" [eee045ce-7688-4414-b1bc-d826f5823500] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 06:14:53.945521 1600247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntp5r" [1c150a9c-ecf0-4b0a-915f-2c11086617cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 06:14:53.945565 1600247 system_pods.go:89] "storage-provisioner" [f7eb6858-f3fe-49df-9b50-ed902bf3de6f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:14:53.945589 1600247 system_pods.go:126] duration metric: took 1.056387317s to wait for k8s-apps to be running ...
	I1216 06:14:53.945617 1600247 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:14:53.945696 1600247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:14:54.051751 1600247 system_svc.go:56] duration metric: took 106.125686ms WaitForService to wait for kubelet
	I1216 06:14:54.051826 1600247 kubeadm.go:587] duration metric: took 43.289639094s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:14:54.051862 1600247 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:14:54.063011 1600247 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 06:14:54.063198 1600247 node_conditions.go:123] node cpu capacity is 2
	I1216 06:14:54.063231 1600247 node_conditions.go:105] duration metric: took 11.344948ms to run NodePressure ...
	I1216 06:14:54.063272 1600247 start.go:242] waiting for startup goroutines ...
	I1216 06:14:54.067613 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:54.235842 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:54.236529 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:54.352874 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:54.558491 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:54.735851 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:54.736536 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:54.841308 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:55.058207 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:55.236110 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:55.236693 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:55.340676 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:55.558343 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:55.736068 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:55.736385 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:55.840729 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:56.059114 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:56.236349 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:56.237670 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:56.346971 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:56.563490 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:56.737561 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:56.737963 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:56.840584 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:57.066043 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:57.237693 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:57.238160 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:57.340256 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:57.560269 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:57.741142 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:57.742216 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:57.840903 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:58.059201 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:58.236606 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:58.237033 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:58.343410 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:58.558435 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:58.735289 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:58.735444 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:58.840532 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:59.058283 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:59.235299 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:59.235476 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:59.340552 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:14:59.557672 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:14:59.735067 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:14:59.735747 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:14:59.841587 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:00.071144 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:00.238290 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:00.252036 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:00.370049 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:00.559867 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:00.735667 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:00.736046 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:00.840936 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:01.059533 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:01.236949 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:01.237425 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:01.341126 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:01.558504 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:01.737322 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:01.737785 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:01.840616 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:02.058836 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:02.236397 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:02.236927 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:02.340719 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:02.559136 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:02.735947 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:02.736196 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:02.841122 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:03.058902 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:03.235865 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:03.235990 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:03.340675 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:03.558412 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:03.735648 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:03.736976 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:03.840701 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:04.057587 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:04.235948 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:04.235974 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:04.339988 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:04.558862 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:04.736048 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:04.736459 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:04.841016 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:05.059469 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:05.236706 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:05.237060 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:05.340171 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:05.559049 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:05.734827 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:05.735164 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:05.840740 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:06.058134 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:06.235626 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:06.237301 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:06.340815 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:06.562081 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:06.736365 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:06.737830 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:06.841178 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:07.059936 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:07.238710 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:07.249083 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:07.339789 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:07.559465 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:07.737101 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:07.737426 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:07.840973 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:08.058861 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:08.236592 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:08.237870 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:08.340374 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:08.559786 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:08.754990 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:08.755197 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:08.844696 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:09.060946 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:09.237486 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:09.238258 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:09.340631 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:09.559148 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:09.736392 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:09.736591 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:09.854846 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:10.059286 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:10.236337 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:10.236770 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:10.340295 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:10.563001 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:10.735176 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:10.735974 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:10.840031 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:11.058487 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:11.235512 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:11.236535 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:11.341827 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:11.559299 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:11.736077 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:11.736409 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:11.843539 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:12.058718 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:12.234908 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:12.235102 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:12.344460 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:12.594830 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:12.735445 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:12.735679 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:12.840684 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:13.057840 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:13.236155 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:13.236437 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:13.346468 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:13.558447 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:13.735126 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:13.735301 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:13.841608 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:14.058853 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:14.236304 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:14.236781 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:14.340338 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:14.557989 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:14.736413 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:14.736965 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:14.841371 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:15.058199 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:15.243081 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:15.243248 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:15.341744 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:15.558908 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:15.736426 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:15.737829 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:15.840323 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:16.058541 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:16.235628 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:16.236275 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:16.340555 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:16.557796 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:16.735995 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:16.737263 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:16.840181 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:17.058775 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:17.235965 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:17.236551 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:17.340512 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:17.559648 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:17.735407 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:17.736107 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:17.840303 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:18.058916 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:18.234702 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:18.235103 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:18.341079 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:18.559612 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:18.736984 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:18.737353 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:18.840908 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:19.058377 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:19.235177 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:19.235497 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:19.340590 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:19.559796 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:19.734868 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:19.735385 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:19.843342 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:20.058236 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:20.235328 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:20.235539 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:20.340139 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:20.566799 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:20.735948 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:20.736187 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:20.840288 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:21.057791 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:21.234934 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:21.235768 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:21.339762 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:21.558442 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:21.735494 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:21.738318 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:21.840887 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:22.059115 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:22.235298 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:22.235319 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:22.340336 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:22.558015 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:22.735209 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:22.736378 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:22.840556 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:23.058375 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:23.235901 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:23.236100 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:23.339876 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:23.558844 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:23.735851 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:23.736116 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:23.840563 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:24.058468 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:24.234615 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:24.234738 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 06:15:24.341527 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:24.563350 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:24.735115 1600247 kapi.go:107] duration metric: took 1m7.504103062s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 06:15:24.735211 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:24.842783 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:25.060654 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:25.238439 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:25.340058 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:25.558896 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:25.735763 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:25.840680 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:26.058614 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:26.234600 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:26.340134 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:26.557665 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:26.735490 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:26.863961 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:27.059178 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:27.234408 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:27.340898 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:27.559324 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:27.734979 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:27.840619 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:28.059023 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:28.234971 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:28.340244 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:28.557964 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:28.735268 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:28.840922 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:29.059626 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:29.234959 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:29.340224 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:29.557752 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:29.735401 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:29.840727 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:30.077769 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:30.235659 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:30.341158 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:30.564577 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:30.735034 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:30.843470 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:31.058806 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:31.234703 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:31.340866 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:31.558606 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:31.735789 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:31.839830 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:32.058651 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:32.235072 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:32.340180 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:32.557394 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:32.734614 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:32.840663 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:33.060784 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:33.235544 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:33.340627 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:33.558687 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:33.737305 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:33.840620 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:34.059210 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:34.234880 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:34.339985 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:34.558573 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:34.738591 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:34.841074 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:35.059230 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:35.235452 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:35.340777 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:35.558526 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:35.735074 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:35.839968 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:36.058854 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:36.235241 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:36.340426 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:36.557763 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:36.734821 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:36.840224 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:37.057904 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:37.235239 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:37.347151 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:37.559343 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:37.737518 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:37.840703 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:38.066300 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:38.241604 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:38.340532 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:38.560894 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:38.740676 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:38.841541 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:39.060128 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:39.235098 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:39.342549 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:39.558066 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:39.734948 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:39.841920 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:40.059884 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:40.235366 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:40.343675 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:40.557778 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:40.737046 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:40.843672 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:41.059027 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:41.236899 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:41.340849 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 06:15:41.559110 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:41.742308 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:41.842299 1600247 kapi.go:107] duration metric: took 1m21.005489102s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 06:15:41.846107 1600247 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-142606 cluster.
	I1216 06:15:41.849002 1600247 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 06:15:41.851945 1600247 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 06:15:42.059112 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:42.235747 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:42.558233 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:42.734437 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:43.057445 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:43.235002 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:43.558715 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:43.735621 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:44.064149 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:44.234673 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:44.559212 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:44.734855 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:45.064046 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:45.238759 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:45.558816 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:45.735156 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:46.057745 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:46.235276 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:46.557968 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:46.735549 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:47.057892 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:47.240887 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:47.558343 1600247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 06:15:47.734460 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:48.058060 1600247 kapi.go:107] duration metric: took 1m30.503692488s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 06:15:48.234194 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:48.734511 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:49.235390 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:49.735020 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:50.235427 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:50.735608 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:51.234572 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:51.735865 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:52.234588 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:52.735036 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:53.234244 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:53.734815 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:54.234644 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:54.734636 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:55.235532 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:55.734872 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:56.234802 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:56.736169 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:57.234775 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:57.734758 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:58.235168 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:58.735126 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:59.235723 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:15:59.735139 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:00.261924 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:00.734940 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:01.235232 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:01.734868 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:02.234883 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:02.734463 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:03.234802 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:03.734130 1600247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 06:16:04.235422 1600247 kapi.go:107] duration metric: took 1m47.004411088s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 06:16:04.240359 1600247 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, inspektor-gadget, storage-provisioner, registry-creds, cloud-spanner, ingress-dns, metrics-server, storage-provisioner-rancher, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1216 06:16:04.243853 1600247 addons.go:530] duration metric: took 1m53.480765171s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin inspektor-gadget storage-provisioner registry-creds cloud-spanner ingress-dns metrics-server storage-provisioner-rancher yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1216 06:16:04.243918 1600247 start.go:247] waiting for cluster config update ...
	I1216 06:16:04.243941 1600247 start.go:256] writing updated cluster config ...
	I1216 06:16:04.245241 1600247 ssh_runner.go:195] Run: rm -f paused
	I1216 06:16:04.250321 1600247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:16:04.261109 1600247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hzh7x" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.275578 1600247 pod_ready.go:94] pod "coredns-66bc5c9577-hzh7x" is "Ready"
	I1216 06:16:04.275659 1600247 pod_ready.go:86] duration metric: took 14.522712ms for pod "coredns-66bc5c9577-hzh7x" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.281118 1600247 pod_ready.go:83] waiting for pod "etcd-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.289839 1600247 pod_ready.go:94] pod "etcd-addons-142606" is "Ready"
	I1216 06:16:04.289913 1600247 pod_ready.go:86] duration metric: took 8.721699ms for pod "etcd-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.293138 1600247 pod_ready.go:83] waiting for pod "kube-apiserver-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.301578 1600247 pod_ready.go:94] pod "kube-apiserver-addons-142606" is "Ready"
	I1216 06:16:04.301653 1600247 pod_ready.go:86] duration metric: took 8.442771ms for pod "kube-apiserver-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.304445 1600247 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.653970 1600247 pod_ready.go:94] pod "kube-controller-manager-addons-142606" is "Ready"
	I1216 06:16:04.654005 1600247 pod_ready.go:86] duration metric: took 349.482904ms for pod "kube-controller-manager-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:04.855557 1600247 pod_ready.go:83] waiting for pod "kube-proxy-g5n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.254351 1600247 pod_ready.go:94] pod "kube-proxy-g5n5p" is "Ready"
	I1216 06:16:05.254384 1600247 pod_ready.go:86] duration metric: took 398.800433ms for pod "kube-proxy-g5n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.455018 1600247 pod_ready.go:83] waiting for pod "kube-scheduler-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.855052 1600247 pod_ready.go:94] pod "kube-scheduler-addons-142606" is "Ready"
	I1216 06:16:05.855078 1600247 pod_ready.go:86] duration metric: took 400.033111ms for pod "kube-scheduler-addons-142606" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:16:05.855120 1600247 pod_ready.go:40] duration metric: took 1.604739835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:16:05.919372 1600247 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 06:16:05.923168 1600247 out.go:179] * Done! kubectl is now configured to use "addons-142606" cluster and "default" namespace by default
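The repeated kapi.go:96 entries above are minikube polling each addon's pods by label selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) until they report Ready, after which kapi.go:107 logs the total wait. The sketch below shows that polling pattern with client-go; it is an illustrative approximation under assumed namespace, selector, and timeout values taken from the log, not minikube's actual kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady polls pods matching selector in ns until every one of
// them reports the Ready condition, or the timeout expires.
func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling on transient errors or while no pods exist yet
		}
		for _, p := range pods.Items {
			if !isReady(&p) {
				return false, nil
			}
		}
		return true, nil
	})
}

// isReady reports whether the pod has the PodReady condition set to True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Same selector the log above polls for the ingress addon; namespace and
	// timeout are assumptions for the sketch.
	if err := waitForPodsReady(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are Ready")
}

The 500ms interval is only a guess, but it roughly matches the cadence of the per-selector timestamps in the log above.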
	
	
	==> CRI-O <==
	Dec 16 06:16:06 addons-142606 crio[829]: time="2025-12-16T06:16:06.114518582Z" level=info msg="Stopped pod sandbox (already stopped): 121b850aa8ec1a38ad2df2577c1e498bfa4afc89966363d7e6e95605e91c02ae" id=26896ff1-1731-4668-ad3b-8cbdf26fe7e9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 06:16:06 addons-142606 crio[829]: time="2025-12-16T06:16:06.115005512Z" level=info msg="Removing pod sandbox: 121b850aa8ec1a38ad2df2577c1e498bfa4afc89966363d7e6e95605e91c02ae" id=c4881452-ae4c-4854-a2b4-53d3abdffd4d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 06:16:06 addons-142606 crio[829]: time="2025-12-16T06:16:06.121143144Z" level=info msg="Removed pod sandbox: 121b850aa8ec1a38ad2df2577c1e498bfa4afc89966363d7e6e95605e91c02ae" id=c4881452-ae4c-4854-a2b4-53d3abdffd4d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 06:16:06 addons-142606 crio[829]: time="2025-12-16T06:16:06.988796033Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0dbed8e4-1f48-4b3e-82cb-67c6d48a0569 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 06:16:06 addons-142606 crio[829]: time="2025-12-16T06:16:06.988862421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:16:06 addons-142606 crio[829]: time="2025-12-16T06:16:06.995604145Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:adc631c8ab7367060af9a588cfb0e25a7bcbd12d8c8aebead05b571cd0b9eb25 UID:16a10a01-65c5-40c7-b5ec-ed523d74b116 NetNS:/var/run/netns/0131a565-11a0-4b37-8906-a83633899d3f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40018b8d38}] Aliases:map[]}"
	Dec 16 06:16:06 addons-142606 crio[829]: time="2025-12-16T06:16:06.995784036Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.012396132Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:adc631c8ab7367060af9a588cfb0e25a7bcbd12d8c8aebead05b571cd0b9eb25 UID:16a10a01-65c5-40c7-b5ec-ed523d74b116 NetNS:/var/run/netns/0131a565-11a0-4b37-8906-a83633899d3f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40018b8d38}] Aliases:map[]}"
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.013669582Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.018101292Z" level=info msg="Ran pod sandbox adc631c8ab7367060af9a588cfb0e25a7bcbd12d8c8aebead05b571cd0b9eb25 with infra container: default/busybox/POD" id=0dbed8e4-1f48-4b3e-82cb-67c6d48a0569 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.019660692Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee7b3db2-4d48-4d2c-b8f9-5b82b70c1168 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.019894164Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ee7b3db2-4d48-4d2c-b8f9-5b82b70c1168 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.019950509Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ee7b3db2-4d48-4d2c-b8f9-5b82b70c1168 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.021906698Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5994a1d0-c4cf-42e5-99f7-14604a51a0d4 name=/runtime.v1.ImageService/PullImage
	Dec 16 06:16:07 addons-142606 crio[829]: time="2025-12-16T06:16:07.023891457Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.019752388Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5994a1d0-c4cf-42e5-99f7-14604a51a0d4 name=/runtime.v1.ImageService/PullImage
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.02091162Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d54b4a07-a17f-4f24-adfd-846b574f104c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.023598065Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=403371cd-f3f6-43b5-9351-2c01c17debfa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.03804062Z" level=info msg="Creating container: default/busybox/busybox" id=8dcda45b-f7d2-4b70-994e-b8669c760f2f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.038207809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.057481463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.058038835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.099259985Z" level=info msg="Created container a39808548c99c7d69a92dd802d81f61e56a337ef6a9b88dd5f0e442da03d0c1c: default/busybox/busybox" id=8dcda45b-f7d2-4b70-994e-b8669c760f2f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.100306913Z" level=info msg="Starting container: a39808548c99c7d69a92dd802d81f61e56a337ef6a9b88dd5f0e442da03d0c1c" id=c4ea4aa9-0294-48b9-bb19-cc6caf00e543 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 06:16:09 addons-142606 crio[829]: time="2025-12-16T06:16:09.103318259Z" level=info msg="Started container" PID=4989 containerID=a39808548c99c7d69a92dd802d81f61e56a337ef6a9b88dd5f0e442da03d0c1c description=default/busybox/busybox id=c4ea4aa9-0294-48b9-bb19-cc6caf00e543 name=/runtime.v1.RuntimeService/StartContainer sandboxID=adc631c8ab7367060af9a588cfb0e25a7bcbd12d8c8aebead05b571cd0b9eb25
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a39808548c99c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   adc631c8ab736       busybox                                     default
	b5e6638bf6970       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             15 seconds ago       Running             controller                               0                   32fcdec6fe4c9       ingress-nginx-controller-85d4c799dd-lswbn   ingress-nginx
	6703e84bcca40       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          31 seconds ago       Running             csi-snapshotter                          0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	88067168cfc83       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          33 seconds ago       Running             csi-provisioner                          0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	6731eaf9efe44       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            34 seconds ago       Running             liveness-probe                           0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	0fee244cfec70       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           35 seconds ago       Running             hostpath                                 0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	28c54e5bde756       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                37 seconds ago       Running             node-driver-registrar                    0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	d8f84e055fa4a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 38 seconds ago       Running             gcp-auth                                 0                   29532cbb89ace       gcp-auth-78565c9fb4-cvfrs                   gcp-auth
	3e14d8c6a72b9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            41 seconds ago       Running             gadget                                   0                   dee9e4bf0489f       gadget-gr47j                                gadget
	cdb9ba35fabe7       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             41 seconds ago       Exited              patch                                    2                   ee60df9288f3b       gcp-auth-certs-patch-st7hs                  gcp-auth
	165110b3c1752       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   44 seconds ago       Running             csi-external-health-monitor-controller   0                   08f5c73b4d608       csi-hostpathplugin-ds9r9                    kube-system
	c5f817c74f04c       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             46 seconds ago       Running             csi-attacher                             0                   6c4e5d8283fd0       csi-hostpath-attacher-0                     kube-system
	83c2340bd3725       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             47 seconds ago       Exited              patch                                    1                   4a3beda6b5f20       ingress-nginx-admission-patch-2jxxc         ingress-nginx
	978cf1e646b5f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   47 seconds ago       Exited              create                                   0                   48347a6d56575       ingress-nginx-admission-create-n8gg2        ingress-nginx
	0d582f614e063       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     48 seconds ago       Running             nvidia-device-plugin-ctr                 0                   6b1f76b9dd2c4       nvidia-device-plugin-daemonset-w4pvk        kube-system
	7818fd4ffad1e       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              53 seconds ago       Running             csi-resizer                              0                   a59d38e23d2df       csi-hostpath-resizer-0                      kube-system
	a433ea848c0b6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              55 seconds ago       Running             registry-proxy                           0                   ae31fa60b7bf5       registry-proxy-qh7wq                        kube-system
	161c43bb0c1f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   6098fdaacb978       snapshot-controller-7d9fbc56b8-ljkmz        kube-system
	794d61cf642b9       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   d164098573082       yakd-dashboard-5ff678cb9-g8xrb              yakd-dashboard
	8abe529e41335       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   70ea98e42e819       snapshot-controller-7d9fbc56b8-ntp5r        kube-system
	a9c9065484348       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   cb2c98f715668       registry-6b586f9694-prj95                   kube-system
	b44a527d8f947       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   8264b5ab0fda8       local-path-provisioner-648f6765c9-p55rr     local-path-storage
	c26c874f20ada       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   056e765782b41       kube-ingress-dns-minikube                   kube-system
	a6a3cfe490f36       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   f66f4a99edf04       cloud-spanner-emulator-5bdddb765-fdxwf      default
	ce1c26a19229b       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   ea4f10ad66d82       metrics-server-85b7d694d7-t6mbs             kube-system
	05deafa86a477       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   2f033fd13c808       storage-provisioner                         kube-system
	3ba24e9ad28c6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   812830e954ef1       coredns-66bc5c9577-hzh7x                    kube-system
	420ec82bb1093       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   62c6b2b311cdc       kube-proxy-g5n5p                            kube-system
	200b85d246fd0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   3fb6e285e445f       kindnet-t8fqq                               kube-system
	df77467f393ab       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   d5ff8aa59ae82       etcd-addons-142606                          kube-system
	579811cebcc83       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   80c1f7eac1e9f       kube-scheduler-addons-142606                kube-system
	f245307e594fb       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   4603e2976e9c2       kube-controller-manager-addons-142606       kube-system
	c9eb26e694306       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   630d75d0c4d56       kube-apiserver-addons-142606                kube-system
	
	
	==> coredns [3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8] <==
	[INFO] 10.244.0.10:57034 - 26904 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000222295s
	[INFO] 10.244.0.10:57034 - 45020 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002249033s
	[INFO] 10.244.0.10:57034 - 8624 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00246492s
	[INFO] 10.244.0.10:57034 - 21401 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000135124s
	[INFO] 10.244.0.10:57034 - 18084 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000094778s
	[INFO] 10.244.0.10:38179 - 30117 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155005s
	[INFO] 10.244.0.10:38179 - 29895 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082413s
	[INFO] 10.244.0.10:50531 - 46224 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133523s
	[INFO] 10.244.0.10:50531 - 46043 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000158689s
	[INFO] 10.244.0.10:49525 - 9094 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109564s
	[INFO] 10.244.0.10:49525 - 8906 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137486s
	[INFO] 10.244.0.10:45727 - 42048 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001446407s
	[INFO] 10.244.0.10:45727 - 41859 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001497394s
	[INFO] 10.244.0.10:53308 - 44634 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123464s
	[INFO] 10.244.0.10:53308 - 44814 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000464973s
	[INFO] 10.244.0.20:57954 - 25106 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000217644s
	[INFO] 10.244.0.20:58198 - 50745 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017249s
	[INFO] 10.244.0.20:49658 - 5258 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137199s
	[INFO] 10.244.0.20:51292 - 50006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125105s
	[INFO] 10.244.0.20:45806 - 53631 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000303987s
	[INFO] 10.244.0.20:60210 - 9172 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000320316s
	[INFO] 10.244.0.20:34277 - 39597 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00263695s
	[INFO] 10.244.0.20:43917 - 16084 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003460828s
	[INFO] 10.244.0.20:35135 - 28426 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002211511s
	[INFO] 10.244.0.20:60089 - 59835 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001838215s
	
	
	==> describe nodes <==
	Name:               addons-142606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-142606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=addons-142606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T06_14_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-142606
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-142606"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:14:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-142606
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 06:16:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 06:16:10 +0000   Tue, 16 Dec 2025 06:13:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 06:16:10 +0000   Tue, 16 Dec 2025 06:13:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 06:16:10 +0000   Tue, 16 Dec 2025 06:13:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 06:16:10 +0000   Tue, 16 Dec 2025 06:14:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-142606
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                a2823150-fbe0-44c8-b17f-fc2660ac30ce
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     cloud-spanner-emulator-5bdddb765-fdxwf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  gadget                      gadget-gr47j                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gcp-auth                    gcp-auth-78565c9fb4-cvfrs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-lswbn    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m2s
	  kube-system                 coredns-66bc5c9577-hzh7x                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m8s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpathplugin-ds9r9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 etcd-addons-142606                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m15s
	  kube-system                 kindnet-t8fqq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m8s
	  kube-system                 kube-apiserver-addons-142606                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-addons-142606        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-g5n5p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-addons-142606                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 metrics-server-85b7d694d7-t6mbs              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m4s
	  kube-system                 nvidia-device-plugin-daemonset-w4pvk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 registry-6b586f9694-prj95                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 registry-creds-764b6fb674-8vxwt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 registry-proxy-qh7wq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 snapshot-controller-7d9fbc56b8-ljkmz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-ntp5r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  local-path-storage          local-path-provisioner-648f6765c9-p55rr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-g8xrb               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m6s                   kube-proxy       
	  Normal   Starting                 2m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node addons-142606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node addons-142606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node addons-142606 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s                  kubelet          Node addons-142606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s                  kubelet          Node addons-142606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s                  kubelet          Node addons-142606 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m9s                   node-controller  Node addons-142606 event: Registered Node addons-142606 in Controller
	  Normal   NodeReady                87s                    kubelet          Node addons-142606 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1] <==
	{"level":"warn","ts":"2025-12-16T06:14:01.864289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.886509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.916947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.951084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:01.980894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.006508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.018143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.034110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.054085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.065828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.092555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.109794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.125731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.140718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.190266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.219304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.235742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.272560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:02.308690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:17.826441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:17.850321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.268998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.296015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.312463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T06:14:40.333996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56034","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d8f84e055fa4ad5627655e68e01268675ea80482afc04b1ce4728b8d18407e57] <==
	2025/12/16 06:15:40 GCP Auth Webhook started!
	2025/12/16 06:16:06 Ready to marshal response ...
	2025/12/16 06:16:06 Ready to write response ...
	2025/12/16 06:16:06 Ready to marshal response ...
	2025/12/16 06:16:06 Ready to write response ...
	2025/12/16 06:16:06 Ready to marshal response ...
	2025/12/16 06:16:06 Ready to write response ...
	
	
	==> kernel <==
	 06:16:19 up  8:58,  0 user,  load average: 1.93, 1.58, 1.87
	Linux addons-142606 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92] <==
	E1216 06:14:42.358247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 06:14:42.422873       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 06:14:42.422873       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1216 06:14:42.422961       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1216 06:14:44.023024       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 06:14:44.023054       1 metrics.go:72] Registering metrics
	I1216 06:14:44.023126       1 controller.go:711] "Syncing nftables rules"
	I1216 06:14:52.362189       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:14:52.362227       1 main.go:301] handling current node
	I1216 06:15:02.358079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:15:02.358109       1 main.go:301] handling current node
	I1216 06:15:12.358167       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:15:12.358280       1 main.go:301] handling current node
	I1216 06:15:22.358130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:15:22.358160       1 main.go:301] handling current node
	I1216 06:15:32.357616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:15:32.357650       1 main.go:301] handling current node
	I1216 06:15:42.358047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:15:42.358103       1 main.go:301] handling current node
	I1216 06:15:52.360554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:15:52.360593       1 main.go:301] handling current node
	I1216 06:16:02.362594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:16:02.362628       1 main.go:301] handling current node
	I1216 06:16:12.359562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 06:16:12.359601       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e] <==
	I1216 06:14:17.499671       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.100.127.82"}
	W1216 06:14:17.826337       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:17.841312       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1216 06:14:20.687562       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.230.18"}
	W1216 06:14:40.268899       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:40.287914       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:40.312418       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:40.327490       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1216 06:14:52.702339       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.230.18:443: connect: connection refused
	E1216 06:14:52.702388       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.230.18:443: connect: connection refused" logger="UnhandledError"
	W1216 06:14:52.702829       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.230.18:443: connect: connection refused
	E1216 06:14:52.702860       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.230.18:443: connect: connection refused" logger="UnhandledError"
	W1216 06:14:52.788800       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.230.18:443: connect: connection refused
	E1216 06:14:52.788847       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.230.18:443: connect: connection refused" logger="UnhandledError"
	E1216 06:15:07.409179       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.210.185:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.210.185:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.210.185:443: connect: connection refused" logger="UnhandledError"
	W1216 06:15:07.410281       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 06:15:07.410371       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 06:15:07.489650       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 06:15:07.496843       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1216 06:16:16.979741       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39880: use of closed network connection
	E1216 06:16:17.208831       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39912: use of closed network connection
	E1216 06:16:17.347778       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39928: use of closed network connection
	
	
	==> kube-controller-manager [f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2] <==
	I1216 06:14:10.295766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 06:14:10.295836       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 06:14:10.295767       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 06:14:10.295880       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 06:14:10.295967       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 06:14:10.296301       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 06:14:10.296881       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 06:14:10.298108       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 06:14:10.298351       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 06:14:10.298377       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 06:14:10.298399       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 06:14:10.299792       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 06:14:10.305033       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 06:14:10.344670       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 06:14:10.344884       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 06:14:10.344904       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1216 06:14:15.953975       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 06:14:40.261093       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 06:14:40.261255       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 06:14:40.261315       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 06:14:40.293651       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 06:14:40.298938       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 06:14:40.361565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 06:14:40.399958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 06:14:55.253233       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b] <==
	I1216 06:14:12.238672       1 server_linux.go:53] "Using iptables proxy"
	I1216 06:14:12.358328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 06:14:12.459064       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 06:14:12.459102       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 06:14:12.459185       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 06:14:12.560556       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 06:14:12.560604       1 server_linux.go:132] "Using iptables Proxier"
	I1216 06:14:12.578309       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 06:14:12.578648       1 server.go:527] "Version info" version="v1.34.2"
	I1216 06:14:12.578664       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 06:14:12.580042       1 config.go:200] "Starting service config controller"
	I1216 06:14:12.580052       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 06:14:12.580070       1 config.go:106] "Starting endpoint slice config controller"
	I1216 06:14:12.580074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 06:14:12.580086       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 06:14:12.580090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 06:14:12.580728       1 config.go:309] "Starting node config controller"
	I1216 06:14:12.580736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 06:14:12.580742       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 06:14:12.680334       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 06:14:12.680367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 06:14:12.680406       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a] <==
	I1216 06:14:02.386682       1 serving.go:386] Generated self-signed cert in-memory
	W1216 06:14:04.827633       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 06:14:04.827749       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 06:14:04.827783       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 06:14:04.827812       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 06:14:04.849723       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 06:14:04.850358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 06:14:04.852414       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 06:14:04.852514       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1216 06:14:04.853984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1216 06:14:04.854450       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 06:14:04.854512       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 06:14:06.054770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 06:15:33 addons-142606 kubelet[1283]: I1216 06:15:33.978900    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz9mr\" (UniqueName: \"kubernetes.io/projected/9f5e67a3-be60-4667-915d-9f4de345bf19-kube-api-access-rz9mr\") pod \"9f5e67a3-be60-4667-915d-9f4de345bf19\" (UID: \"9f5e67a3-be60-4667-915d-9f4de345bf19\") "
	Dec 16 06:15:33 addons-142606 kubelet[1283]: I1216 06:15:33.980972    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5e67a3-be60-4667-915d-9f4de345bf19-kube-api-access-rz9mr" (OuterVolumeSpecName: "kube-api-access-rz9mr") pod "9f5e67a3-be60-4667-915d-9f4de345bf19" (UID: "9f5e67a3-be60-4667-915d-9f4de345bf19"). InnerVolumeSpecName "kube-api-access-rz9mr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 06:15:34 addons-142606 kubelet[1283]: I1216 06:15:34.080235    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rz9mr\" (UniqueName: \"kubernetes.io/projected/9f5e67a3-be60-4667-915d-9f4de345bf19-kube-api-access-rz9mr\") on node \"addons-142606\" DevicePath \"\""
	Dec 16 06:15:34 addons-142606 kubelet[1283]: I1216 06:15:34.663286    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3beda6b5f20ed54c696fce5a9bc75cab4c93a38e32c0cfbd1971f65e5dc0e4"
	Dec 16 06:15:37 addons-142606 kubelet[1283]: I1216 06:15:37.068408    1283 scope.go:117] "RemoveContainer" containerID="652d82e3b41391cacd8262c02deeaeb936df766c221cb7beecd5066928f23950"
	Dec 16 06:15:37 addons-142606 kubelet[1283]: I1216 06:15:37.677516    1283 scope.go:117] "RemoveContainer" containerID="652d82e3b41391cacd8262c02deeaeb936df766c221cb7beecd5066928f23950"
	Dec 16 06:15:37 addons-142606 kubelet[1283]: I1216 06:15:37.724256    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-gr47j" podStartSLOduration=65.125786423 podStartE2EDuration="1m21.724217786s" podCreationTimestamp="2025-12-16 06:14:16 +0000 UTC" firstStartedPulling="2025-12-16 06:15:20.804923982 +0000 UTC m=+74.881616482" lastFinishedPulling="2025-12-16 06:15:37.403355345 +0000 UTC m=+91.480047845" observedRunningTime="2025-12-16 06:15:37.700313855 +0000 UTC m=+91.777006364" watchObservedRunningTime="2025-12-16 06:15:37.724217786 +0000 UTC m=+91.800910294"
	Dec 16 06:15:38 addons-142606 kubelet[1283]: I1216 06:15:38.825734    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrwjn\" (UniqueName: \"kubernetes.io/projected/64c1b9dc-0912-4695-af06-674137988f39-kube-api-access-hrwjn\") pod \"64c1b9dc-0912-4695-af06-674137988f39\" (UID: \"64c1b9dc-0912-4695-af06-674137988f39\") "
	Dec 16 06:15:38 addons-142606 kubelet[1283]: I1216 06:15:38.846546    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c1b9dc-0912-4695-af06-674137988f39-kube-api-access-hrwjn" (OuterVolumeSpecName: "kube-api-access-hrwjn") pod "64c1b9dc-0912-4695-af06-674137988f39" (UID: "64c1b9dc-0912-4695-af06-674137988f39"). InnerVolumeSpecName "kube-api-access-hrwjn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 06:15:38 addons-142606 kubelet[1283]: I1216 06:15:38.927765    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrwjn\" (UniqueName: \"kubernetes.io/projected/64c1b9dc-0912-4695-af06-674137988f39-kube-api-access-hrwjn\") on node \"addons-142606\" DevicePath \"\""
	Dec 16 06:15:39 addons-142606 kubelet[1283]: I1216 06:15:39.699118    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee60df9288f3b6454d85789eed74993e74b2869fb95379a94ecfc6af77c1d478"
	Dec 16 06:15:41 addons-142606 kubelet[1283]: I1216 06:15:41.929612    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-cvfrs" podStartSLOduration=66.328046798 podStartE2EDuration="1m21.929592429s" podCreationTimestamp="2025-12-16 06:14:20 +0000 UTC" firstStartedPulling="2025-12-16 06:15:25.056172265 +0000 UTC m=+79.132864765" lastFinishedPulling="2025-12-16 06:15:40.657717888 +0000 UTC m=+94.734410396" observedRunningTime="2025-12-16 06:15:41.752450601 +0000 UTC m=+95.829143232" watchObservedRunningTime="2025-12-16 06:15:41.929592429 +0000 UTC m=+96.006285028"
	Dec 16 06:15:44 addons-142606 kubelet[1283]: I1216 06:15:44.283749    1283 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 16 06:15:44 addons-142606 kubelet[1283]: I1216 06:15:44.283800    1283 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 16 06:15:56 addons-142606 kubelet[1283]: I1216 06:15:56.040511    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-ds9r9" podStartSLOduration=10.90375869 podStartE2EDuration="1m4.040459994s" podCreationTimestamp="2025-12-16 06:14:52 +0000 UTC" firstStartedPulling="2025-12-16 06:14:54.043826113 +0000 UTC m=+48.120518613" lastFinishedPulling="2025-12-16 06:15:47.180527418 +0000 UTC m=+101.257219917" observedRunningTime="2025-12-16 06:15:47.782981878 +0000 UTC m=+101.859674386" watchObservedRunningTime="2025-12-16 06:15:56.040459994 +0000 UTC m=+110.117152503"
	Dec 16 06:15:56 addons-142606 kubelet[1283]: I1216 06:15:56.070844    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="130f45b9-9375-4057-9f95-41e56a75587d" path="/var/lib/kubelet/pods/130f45b9-9375-4057-9f95-41e56a75587d/volumes"
	Dec 16 06:15:56 addons-142606 kubelet[1283]: E1216 06:15:56.897851    1283 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 16 06:15:56 addons-142606 kubelet[1283]: E1216 06:15:56.897940    1283 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42db279c-e8af-4665-a64c-91e4804c2b00-gcr-creds podName:42db279c-e8af-4665-a64c-91e4804c2b00 nodeName:}" failed. No retries permitted until 2025-12-16 06:17:00.89791992 +0000 UTC m=+174.974612420 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/42db279c-e8af-4665-a64c-91e4804c2b00-gcr-creds") pod "registry-creds-764b6fb674-8vxwt" (UID: "42db279c-e8af-4665-a64c-91e4804c2b00") : secret "registry-creds-gcr" not found
	Dec 16 06:16:06 addons-142606 kubelet[1283]: I1216 06:16:06.092654    1283 scope.go:117] "RemoveContainer" containerID="5a0f9adb9ac47f31909c69ca55a45ca8a6eb378b208efdf49831947325d65840"
	Dec 16 06:16:06 addons-142606 kubelet[1283]: I1216 06:16:06.677000    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-lswbn" podStartSLOduration=103.355329458 podStartE2EDuration="1m49.676982464s" podCreationTimestamp="2025-12-16 06:14:17 +0000 UTC" firstStartedPulling="2025-12-16 06:15:57.41318921 +0000 UTC m=+111.489881710" lastFinishedPulling="2025-12-16 06:16:03.734842217 +0000 UTC m=+117.811534716" observedRunningTime="2025-12-16 06:16:03.854175938 +0000 UTC m=+117.930868437" watchObservedRunningTime="2025-12-16 06:16:06.676982464 +0000 UTC m=+120.753674963"
	Dec 16 06:16:06 addons-142606 kubelet[1283]: I1216 06:16:06.810355    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdhg\" (UniqueName: \"kubernetes.io/projected/16a10a01-65c5-40c7-b5ec-ed523d74b116-kube-api-access-zgdhg\") pod \"busybox\" (UID: \"16a10a01-65c5-40c7-b5ec-ed523d74b116\") " pod="default/busybox"
	Dec 16 06:16:06 addons-142606 kubelet[1283]: I1216 06:16:06.810576    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/16a10a01-65c5-40c7-b5ec-ed523d74b116-gcp-creds\") pod \"busybox\" (UID: \"16a10a01-65c5-40c7-b5ec-ed523d74b116\") " pod="default/busybox"
	Dec 16 06:16:07 addons-142606 kubelet[1283]: W1216 06:16:07.018556    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bf001fc7b73962d265e364d5dc0a0431f53d593dbeb39b7f63abe2349353c726/crio-adc631c8ab7367060af9a588cfb0e25a7bcbd12d8c8aebead05b571cd0b9eb25 WatchSource:0}: Error finding container adc631c8ab7367060af9a588cfb0e25a7bcbd12d8c8aebead05b571cd0b9eb25: Status 404 returned error can't find the container with id adc631c8ab7367060af9a588cfb0e25a7bcbd12d8c8aebead05b571cd0b9eb25
	Dec 16 06:16:10 addons-142606 kubelet[1283]: I1216 06:16:10.070063    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c1b9dc-0912-4695-af06-674137988f39" path="/var/lib/kubelet/pods/64c1b9dc-0912-4695-af06-674137988f39/volumes"
	Dec 16 06:16:14 addons-142606 kubelet[1283]: I1216 06:16:14.843688    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=6.841788146 podStartE2EDuration="8.843668676s" podCreationTimestamp="2025-12-16 06:16:06 +0000 UTC" firstStartedPulling="2025-12-16 06:16:07.020330131 +0000 UTC m=+121.097022631" lastFinishedPulling="2025-12-16 06:16:09.022210661 +0000 UTC m=+123.098903161" observedRunningTime="2025-12-16 06:16:09.86131468 +0000 UTC m=+123.938007188" watchObservedRunningTime="2025-12-16 06:16:14.843668676 +0000 UTC m=+128.920361175"
	
	
	==> storage-provisioner [05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea] <==
	W1216 06:15:54.110766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:15:56.114500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:15:56.121807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:15:58.125656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:15:58.133791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:00.160124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:00.263486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:02.266871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:02.274082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:04.305071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:04.313492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:06.316988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:06.329628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:08.332416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:08.337636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:10.340782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:10.345514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:12.349752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:12.354664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:14.358134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:14.363131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:16.366233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:16.372902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:18.382545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 06:16:18.392532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-142606 -n addons-142606
helpers_test.go:270: (dbg) Run:  kubectl --context addons-142606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc registry-creds-764b6fb674-8vxwt
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-142606 describe pod ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc registry-creds-764b6fb674-8vxwt
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-142606 describe pod ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc registry-creds-764b6fb674-8vxwt: exit status 1 (87.504423ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-n8gg2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2jxxc" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-8vxwt" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-142606 describe pod ingress-nginx-admission-create-n8gg2 ingress-nginx-admission-patch-2jxxc registry-creds-764b6fb674-8vxwt: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable headlamp --alsologtostderr -v=1: exit status 11 (283.008498ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 06:16:20.602181 1606898 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:20.603553 1606898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:20.603609 1606898 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:20.603632 1606898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:20.603937 1606898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:20.604283 1606898 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:20.604816 1606898 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:20.604875 1606898 addons.go:622] checking whether the cluster is paused
	I1216 06:16:20.605026 1606898 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:20.605063 1606898 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:20.605621 1606898 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:20.630894 1606898 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:20.630945 1606898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:20.655102 1606898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:20.754944 1606898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:20.755032 1606898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:20.795606 1606898 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:20.795628 1606898 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:20.795633 1606898 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:20.795637 1606898 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:20.795641 1606898 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:20.795645 1606898 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:20.795648 1606898 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:20.795654 1606898 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:20.795658 1606898 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:20.795664 1606898 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:20.795668 1606898 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:20.795671 1606898 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:20.795674 1606898 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:20.795677 1606898 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:20.795681 1606898 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:20.795691 1606898 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:20.795694 1606898 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:20.795698 1606898 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:20.795701 1606898 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:20.795703 1606898 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:20.795708 1606898 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:20.795714 1606898 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:20.795718 1606898 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:20.795721 1606898 cri.go:89] found id: ""
	I1216 06:16:20.795781 1606898 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:20.812328 1606898 out.go:203] 
	W1216 06:16:20.815238 1606898 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:20.815270 1606898 out.go:285] * 
	* 
	W1216 06:16:20.822538 1606898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:20.826523 1606898 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.23s)
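Editor's note: every "addons disable" failure in this run shares the root cause visible in the stderr above. Before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then asking runc for its container list, and `sudo runc list -f json` exits non-zero because /run/runc does not exist on this CRI-O node. The sketch below is a standalone Go probe of that kind of check; the helper name and the "treat a missing /run/runc as nothing paused" fallback are illustrative assumptions, not minikube's actual code.

    // paused_probe.go - sketch: ask runc which containers are paused, tolerating a missing /run/runc.
    // Assumption: run on the node (e.g. over `minikube ssh`); this is not minikube's real check.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    func pausedContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil {
    		// On this image /run/runc is absent; treat that as "nothing is paused"
    		// rather than the hard failure seen in the test log above.
    		if strings.Contains(string(out), "no such file or directory") {
    			return nil, nil
    		}
    		return nil, fmt.Errorf("runc list: %v: %s", err, out)
    	}
    	var cs []runcContainer
    	if err := json.Unmarshal(out, &cs); err != nil {
    		return nil, err
    	}
    	var paused []string
    	for _, c := range cs {
    		if c.Status == "paused" {
    			paused = append(paused, c.ID)
    		}
    	}
    	return paused, nil
    }

    func main() {
    	paused, err := pausedContainers()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("paused containers: %d\n", len(paused))
    }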

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-fdxwf" [3757a58a-e762-4644-8481-3dfebab55cd1] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004237819s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (304.607712ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:16:38.650022 1607332 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:38.650703 1607332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:38.650718 1607332 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:38.650724 1607332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:38.650997 1607332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:38.651301 1607332 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:38.651685 1607332 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:38.651705 1607332 addons.go:622] checking whether the cluster is paused
	I1216 06:16:38.651817 1607332 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:38.651833 1607332 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:38.652330 1607332 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:38.671063 1607332 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:38.671127 1607332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:38.691100 1607332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:38.787184 1607332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:38.787291 1607332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:38.821432 1607332 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:38.821450 1607332 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:38.821455 1607332 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:38.821459 1607332 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:38.821462 1607332 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:38.821471 1607332 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:38.821475 1607332 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:38.821479 1607332 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:38.821482 1607332 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:38.821493 1607332 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:38.821497 1607332 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:38.821500 1607332 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:38.821503 1607332 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:38.821506 1607332 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:38.821509 1607332 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:38.821517 1607332 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:38.821521 1607332 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:38.821526 1607332 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:38.821529 1607332 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:38.821532 1607332 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:38.821537 1607332 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:38.821540 1607332 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:38.821543 1607332 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:38.821546 1607332 cri.go:89] found id: ""
	I1216 06:16:38.821598 1607332 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:38.864076 1607332 out.go:203] 
	W1216 06:16:38.867807 1607332 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:38.867835 1607332 out.go:285] * 
	* 
	W1216 06:16:38.879575 1607332 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:38.886592 1607332 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.32s)
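Editor's note: the passing half of this test is the label-selector wait shown above ("waiting 6m0s for pods matching "app=cloud-spanner-emulator""). The following is a rough standalone sketch of that style of poll, done by shelling out to kubectl rather than the helpers the suite actually uses; the context name, label, and timeout are taken from the log, while the loop itself is an assumption.

    // podwait.go - sketch of a label-selector pod wait that polls kubectl.
    // Assumptions: kubectl on PATH and the addons-142606 context from the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func allRunning(context, ns, selector string) (bool, error) {
    	out, err := exec.Command("kubectl", "--context", context, "get", "pods",
    		"-n", ns, "-l", selector,
    		"-o", "jsonpath={.items[*].status.phase}").Output()
    	if err != nil {
    		return false, err
    	}
    	phases := strings.Fields(string(out))
    	if len(phases) == 0 {
    		return false, nil // nothing scheduled yet
    	}
    	for _, p := range phases {
    		if p != "Running" {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		ok, err := allRunning("addons-142606", "default", "app=cloud-spanner-emulator")
    		if err == nil && ok {
    			fmt.Println("healthy")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out")
    }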

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.5s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-142606 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-142606 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-142606 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [fe93c33b-a454-47c8-a8cb-440f741f8185] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [fe93c33b-a454-47c8-a8cb-440f741f8185] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [fe93c33b-a454-47c8-a8cb-440f741f8185] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003957979s
addons_test.go:969: (dbg) Run:  kubectl --context addons-142606 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 ssh "cat /opt/local-path-provisioner/pvc-a3d2f4c9-1b88-4e22-b605-5f5f6ef7354e_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-142606 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-142606 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (294.100845ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:16:42.720815 1607536 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:42.721671 1607536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:42.721708 1607536 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:42.721729 1607536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:42.722019 1607536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:42.722358 1607536 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:42.722833 1607536 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:42.722872 1607536 addons.go:622] checking whether the cluster is paused
	I1216 06:16:42.723025 1607536 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:42.723057 1607536 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:42.723594 1607536 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:42.741869 1607536 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:42.741928 1607536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:42.760770 1607536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:42.863637 1607536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:42.863774 1607536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:42.891966 1607536 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:42.891994 1607536 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:42.891999 1607536 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:42.892003 1607536 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:42.892006 1607536 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:42.892010 1607536 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:42.892013 1607536 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:42.892016 1607536 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:42.892019 1607536 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:42.892025 1607536 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:42.892028 1607536 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:42.892031 1607536 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:42.892034 1607536 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:42.892037 1607536 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:42.892040 1607536 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:42.892045 1607536 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:42.892050 1607536 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:42.892053 1607536 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:42.892056 1607536 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:42.892060 1607536 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:42.892064 1607536 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:42.892067 1607536 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:42.892070 1607536 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:42.892073 1607536 cri.go:89] found id: ""
	I1216 06:16:42.892127 1607536 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:42.920914 1607536 out.go:203] 
	W1216 06:16:42.924204 1607536 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:42.924227 1607536 out.go:285] * 
	* 
	W1216 06:16:42.931926 1607536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:42.936299 1607536 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.50s)
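Editor's note: addons_test.go:978 above reads the written file back from the node at /opt/local-path-provisioner/<volumeName>_<namespace>_<pvcName>/file1, where volumeName is whatever PV the local-path provisioner bound the claim to. The sketch below derives that path from the claim and cats it over minikube ssh; the profile, claim name, and directory layout come from the log, the program itself is an assumption.

    // localpath.go - sketch: resolve a local-path PV's host directory and read a file from it.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	profile, ns, pvc := "addons-142606", "default", "test-pvc"

    	// The bound PV name is recorded on the claim once provisioning finishes.
    	out, err := exec.Command("kubectl", "--context", profile, "get", "pvc", pvc,
    		"-n", ns, "-o", "jsonpath={.spec.volumeName}").Output()
    	if err != nil {
    		panic(err)
    	}
    	volume := strings.TrimSpace(string(out))

    	// local-path-provisioner lays data out as <base>/<pv>_<namespace>_<pvc> on the node.
    	path := fmt.Sprintf("/opt/local-path-provisioner/%s_%s_%s/file1", volume, ns, pvc)

    	data, err := exec.Command("minikube", "-p", profile, "ssh", "--", "cat", path).CombinedOutput()
    	if err != nil {
    		panic(fmt.Errorf("%v: %s", err, data))
    	}
    	fmt.Printf("%s: %s", path, data)
    }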

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.35s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-w4pvk" [bbe25eae-b43b-4904-bd63-a070c971d855] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0041078s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable nvidia-device-plugin --alsologtostderr -v=1
2025/12/16 06:16:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (343.073641ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:16:33.182768 1607106 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:33.193593 1607106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:33.193635 1607106 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:33.193645 1607106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:33.194142 1607106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:33.194534 1607106 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:33.195230 1607106 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:33.195256 1607106 addons.go:622] checking whether the cluster is paused
	I1216 06:16:33.195398 1607106 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:33.195415 1607106 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:33.196124 1607106 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:33.229747 1607106 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:33.229806 1607106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:33.260193 1607106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:33.359948 1607106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:33.360043 1607106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:33.402684 1607106 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:33.402715 1607106 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:33.402722 1607106 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:33.402726 1607106 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:33.402729 1607106 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:33.402733 1607106 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:33.402736 1607106 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:33.402740 1607106 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:33.402743 1607106 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:33.402750 1607106 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:33.402754 1607106 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:33.402757 1607106 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:33.402764 1607106 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:33.402768 1607106 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:33.402771 1607106 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:33.402778 1607106 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:33.402784 1607106 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:33.402788 1607106 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:33.402792 1607106 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:33.402795 1607106 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:33.402799 1607106 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:33.402806 1607106 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:33.402809 1607106 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:33.402813 1607106 cri.go:89] found id: ""
	I1216 06:16:33.402873 1607106 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:33.420568 1607106 out.go:203] 
	W1216 06:16:33.423624 1607106 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:33.423655 1607106 out.go:285] * 
	* 
	W1216 06:16:33.430947 1607106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:33.434471 1607106 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.35s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-g8xrb" [7796a57f-2a2f-4ad9-afdc-7500bf6b3519] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003399628s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-142606 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-142606 addons disable yakd --alsologtostderr -v=1: exit status 11 (255.012996ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:16:26.889542 1606972 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:16:26.890311 1606972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:26.890327 1606972 out.go:374] Setting ErrFile to fd 2...
	I1216 06:16:26.890333 1606972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:26.890634 1606972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:16:26.891014 1606972 mustload.go:66] Loading cluster: addons-142606
	I1216 06:16:26.891441 1606972 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:26.891463 1606972 addons.go:622] checking whether the cluster is paused
	I1216 06:16:26.891633 1606972 config.go:182] Loaded profile config "addons-142606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:16:26.891653 1606972 host.go:66] Checking if "addons-142606" exists ...
	I1216 06:16:26.892220 1606972 cli_runner.go:164] Run: docker container inspect addons-142606 --format={{.State.Status}}
	I1216 06:16:26.911793 1606972 ssh_runner.go:195] Run: systemctl --version
	I1216 06:16:26.911852 1606972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142606
	I1216 06:16:26.929572 1606972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/addons-142606/id_rsa Username:docker}
	I1216 06:16:27.027399 1606972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:16:27.027518 1606972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:16:27.058046 1606972 cri.go:89] found id: "6703e84bcca40b0a594cdf475f863c47388bb3473689dc6dc9131665ff15c722"
	I1216 06:16:27.058074 1606972 cri.go:89] found id: "88067168cfc835ec13a085c3301fa7a0945be84d1b32a45d6272d448705a93b4"
	I1216 06:16:27.058079 1606972 cri.go:89] found id: "6731eaf9efe44a3841c9077eb710e80717cb11a2633c9eb508ffa19f6164b80b"
	I1216 06:16:27.058082 1606972 cri.go:89] found id: "0fee244cfec70855196eca6cad232f4f73eacca15c942ffd069d11069d1f4cb4"
	I1216 06:16:27.058086 1606972 cri.go:89] found id: "28c54e5bde7563267d05bd0f6de8f8354c40960cc47cbfeddccee412b7fe46cb"
	I1216 06:16:27.058090 1606972 cri.go:89] found id: "165110b3c17520e8c6e3174b457d4b772ec69f7de1b240977205602417ac9de3"
	I1216 06:16:27.058096 1606972 cri.go:89] found id: "c5f817c74f04cb31c27fa9bc66b75d3c6e1e311d53f849b092257a1349eaad01"
	I1216 06:16:27.058099 1606972 cri.go:89] found id: "0d582f614e063962306cbbdc21f8d638fb8050f967014d21a8f487be01601d41"
	I1216 06:16:27.058103 1606972 cri.go:89] found id: "7818fd4ffad1eb8a31a4b1ce98a30ce679666d9ced58f83c7d6b7fee8bc7af95"
	I1216 06:16:27.058108 1606972 cri.go:89] found id: "a433ea848c0b6c9e22d66f89f96c00d59764c3b355eeeff67fb834e499878a37"
	I1216 06:16:27.058112 1606972 cri.go:89] found id: "161c43bb0c1f09a93b4ab4d3dc787f8c4cc654b82a0e5d755a7b156616025ca2"
	I1216 06:16:27.058115 1606972 cri.go:89] found id: "8abe529e41335b3b05795d10efae252a394efe05ef733112fb2247a1e4fd1c92"
	I1216 06:16:27.058119 1606972 cri.go:89] found id: "a9c90654843486bc32f90e8f985ddbacf205a101da2c3dd5ca31b342bfe712a4"
	I1216 06:16:27.058122 1606972 cri.go:89] found id: "c26c874f20adaf4ff3f5bebc23a86ae73f7dd3cecb90066667bc3a5d3a0c6c19"
	I1216 06:16:27.058125 1606972 cri.go:89] found id: "ce1c26a19229b31cbab4c845e9e6724afb7c05777e5212c27130f447ee267f04"
	I1216 06:16:27.058131 1606972 cri.go:89] found id: "05deafa86a4775c771a7a4f91648e7d0bbbde3e86afb8ded084631f76dadb3ea"
	I1216 06:16:27.058137 1606972 cri.go:89] found id: "3ba24e9ad28c6c3240f0dfc5f6682f61f94d490a15253cca8ed8af56ecef50b8"
	I1216 06:16:27.058141 1606972 cri.go:89] found id: "420ec82bb10934672e188d3b5c75b4015e4ddff1993a56b334897407409b4e9b"
	I1216 06:16:27.058144 1606972 cri.go:89] found id: "200b85d246fd05ce6515e2957bed839b1cd509fbd6973e7fb7b76cfc92dc0e92"
	I1216 06:16:27.058147 1606972 cri.go:89] found id: "df77467f393ab9f56a05b6bda0282ec85b78e7479554e9a66b909f66844386c1"
	I1216 06:16:27.058151 1606972 cri.go:89] found id: "579811cebcc8368d345e24f90d842a2c3691b61c760bea541d93287864a6257a"
	I1216 06:16:27.058154 1606972 cri.go:89] found id: "f245307e594fbb88a44a0deec519111b1a88c9ff3bfc81884eb0fff4916d96b2"
	I1216 06:16:27.058158 1606972 cri.go:89] found id: "c9eb26e694306fb2badad1b156e8c43cd7669aeea899bdaf4f5005d8c36ce56e"
	I1216 06:16:27.058169 1606972 cri.go:89] found id: ""
	I1216 06:16:27.058233 1606972 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 06:16:27.073226 1606972 out.go:203] 
	W1216 06:16:27.076269 1606972 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 06:16:27.076290 1606972 out.go:285] * 
	* 
	W1216 06:16:27.083486 1606972 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:27.086345 1606972 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-142606 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)
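Editor's note: Yakd is the fourth addon in this run to fail at the same disable-time paused check, so the quickest confirmation is on the node itself: CRI-O answers crictl queries while runc's state directory /run/runc is absent. A throwaway diagnostic along those lines is sketched below; the profile name comes from the log, the exact commands are assumptions and not part of the test suite.

    // runc_diag.go - sketch: confirm on the minikube node that crictl works but /run/runc is missing.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	profile := "addons-142606"
    	checks := [][]string{
    		{"sudo", "crictl", "ps", "--quiet"},    // CRI-O itself responds
    		{"ls", "/run/runc"},                    // the directory `runc list` trips over
    		{"sudo", "runc", "list", "-f", "json"}, // reproduces the error from the test log
    	}
    	for _, c := range checks {
    		args := append([]string{"-p", profile, "ssh", "--"}, c...)
    		out, err := exec.Command("minikube", args...).CombinedOutput()
    		fmt.Printf("$ %v\n%s", c, out)
    		if err != nil {
    			fmt.Printf("(exit: %v)\n", err)
    		}
    		fmt.Println()
    	}
    }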

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 06:26:06.820662 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:26:34.528664 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.326567 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.332986 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.344396 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.365813 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.407238 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.488799 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.650407 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:08.972087 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:09.614324 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:10.895980 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:13.457434 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:18.579836 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:28.822144 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:49.303582 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:29:30.266691 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:30:52.191382 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:31:06.820690 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
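Editor's note: the E...cert_rotation lines above come from the test runner's own client-go credential reloader. The host kubeconfig still carries users whose client.crt lives under profiles (addons-142606, functional-487532) that have already been cleaned up, so every reload attempt logs a miss; they are host-side noise, not part of the functional-364120 start. A small sketch that lists kubeconfig users whose client-certificate file is gone follows; the JSON layout is the standard kubeconfig shape, the program itself is an assumption.

    // stalecerts.go - sketch: find kubeconfig users whose client-certificate file no longer exists.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    type kubeconfig struct {
    	Users []struct {
    		Name string `json:"name"`
    		User struct {
    			ClientCertificate string `json:"client-certificate"`
    		} `json:"user"`
    	} `json:"users"`
    }

    func main() {
    	out, err := exec.Command("kubectl", "config", "view", "-o", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var cfg kubeconfig
    	if err := json.Unmarshal(out, &cfg); err != nil {
    		panic(err)
    	}
    	for _, u := range cfg.Users {
    		cert := u.User.ClientCertificate
    		if cert == "" {
    			continue // inline or token credentials, nothing on disk to check
    		}
    		if _, err := os.Stat(cert); os.IsNotExist(err) {
    			fmt.Printf("stale user %q: missing %s\n", u.Name, cert)
    		}
    	}
    }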
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.436734007s)

                                                
                                                
-- stdout --
	* [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - HTTP_PROXY=localhost:42709
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:42709 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-364120 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-364120 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000340218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000340218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
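The suggestion in the stderr block above can be exercised by hand. What follows is a minimal sketch, not something run as part of this report: it retries the same profile and flags as the failing invocation with the cgroup-driver hint appended, then runs the kubelet checks the error output itself points to.

    # Recreate the profile with the cgroup driver suggested by the failure message
    out/minikube-linux-arm64 delete -p functional-364120
    out/minikube-linux-arm64 start -p functional-364120 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd

    # If the kubelet still never becomes healthy, inspect it from inside the node
    out/minikube-linux-arm64 -p functional-364120 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p functional-364120 ssh -- sudo journalctl -xeu kubelet --no-pager
    out/minikube-linux-arm64 -p functional-364120 ssh -- curl -sS http://127.0.0.1:10248/healthz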
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
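As a side note, the port bindings captured in the inspect output above can be read back directly. A quick sketch using the container name from this run; per the NetworkSettings section, the apiserver port 8441 should map to 127.0.0.1:34263:

    # Host binding for the apiserver port
    docker port functional-364120 8441
    # Same value via an inspect template, matching the format string minikube itself uses for 22/tcp in the log below
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-364120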
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 6 (295.859812ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 06:32:20.766744 1633361 status.go:458] kubeconfig endpoint: get endpoint: "functional-364120" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
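The status output above flags a stale kubectl context because the profile's endpoint was never written to the kubeconfig (see the "kubeconfig endpoint" error in the stderr block). A minimal sketch of the repair it suggests, only meaningful once the cluster actually comes up:

    # Re-point the kubeconfig entry for this profile, as the status warning suggests
    out/minikube-linux-arm64 -p functional-364120 update-context
    # Confirm which context kubectl now resolves
    kubectl config current-context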
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-487532 image save kicbase/echo-server:functional-487532 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/1599255.pem                                                                                                 │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image rm kicbase/echo-server:functional-487532 --alsologtostderr                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /usr/share/ca-certificates/1599255.pem                                                                                     │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/15992552.pem                                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /usr/share/ca-certificates/15992552.pem                                                                                    │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image save --daemon kicbase/echo-server:functional-487532 --alsologtostderr                                                             │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/test/nested/copy/1599255/hosts                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format short --alsologtostderr                                                                                               │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format yaml --alsologtostderr                                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh pgrep buildkitd                                                                                                                     │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ image          │ functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr                                                    │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format json --alsologtostderr                                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format table --alsologtostderr                                                                                               │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                                   │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                                   │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                                   │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete         │ -p functional-487532                                                                                                                                      │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:24 UTC │
	│ start          │ -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:24 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:24:00
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:24:00.224790 1627667 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:24:00.224958 1627667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:24:00.224962 1627667 out.go:374] Setting ErrFile to fd 2...
	I1216 06:24:00.224965 1627667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:24:00.225306 1627667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:24:00.225774 1627667 out.go:368] Setting JSON to false
	I1216 06:24:00.226732 1627667 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32792,"bootTime":1765833449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:24:00.226857 1627667 start.go:143] virtualization:  
	I1216 06:24:00.234709 1627667 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:24:00.239195 1627667 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:24:00.239389 1627667 notify.go:221] Checking for updates...
	I1216 06:24:00.249590 1627667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:24:00.255840 1627667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:24:00.261781 1627667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:24:00.268434 1627667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:24:00.272745 1627667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:24:00.277091 1627667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:24:00.351851 1627667 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:24:00.352034 1627667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:24:00.465127 1627667 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 06:24:00.448596524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:24:00.465235 1627667 docker.go:319] overlay module found
	I1216 06:24:00.471533 1627667 out.go:179] * Using the docker driver based on user configuration
	I1216 06:24:00.483971 1627667 start.go:309] selected driver: docker
	I1216 06:24:00.483985 1627667 start.go:927] validating driver "docker" against <nil>
	I1216 06:24:00.484000 1627667 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:24:00.485583 1627667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:24:00.554615 1627667 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 06:24:00.542326508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:24:00.554777 1627667 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:24:00.555033 1627667 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:24:00.558113 1627667 out.go:179] * Using Docker driver with root privileges
	I1216 06:24:00.561141 1627667 cni.go:84] Creating CNI manager for ""
	I1216 06:24:00.561211 1627667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:24:00.561219 1627667 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:24:00.561313 1627667 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:24:00.564720 1627667 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:24:00.567955 1627667 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:24:00.571157 1627667 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:24:00.574302 1627667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:24:00.574323 1627667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:24:00.574369 1627667 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:24:00.574378 1627667 cache.go:65] Caching tarball of preloaded images
	I1216 06:24:00.574469 1627667 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:24:00.574479 1627667 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:24:00.574837 1627667 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:24:00.574869 1627667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json: {Name:mkbe65568d8c0968e538e12823e70ddd937c24b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:00.597286 1627667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:24:00.597300 1627667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:24:00.597314 1627667 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:24:00.597348 1627667 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:24:00.597459 1627667 start.go:364] duration metric: took 96.007µs to acquireMachinesLock for "functional-364120"
	I1216 06:24:00.597484 1627667 start.go:93] Provisioning new machine with config: &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:24:00.597549 1627667 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:24:00.601099 1627667 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1216 06:24:00.601441 1627667 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:42709 to docker env.
	I1216 06:24:00.601534 1627667 start.go:159] libmachine.API.Create for "functional-364120" (driver="docker")
	I1216 06:24:00.601562 1627667 client.go:173] LocalClient.Create starting
	I1216 06:24:00.601629 1627667 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem
	I1216 06:24:00.601662 1627667 main.go:143] libmachine: Decoding PEM data...
	I1216 06:24:00.601676 1627667 main.go:143] libmachine: Parsing certificate...
	I1216 06:24:00.601724 1627667 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem
	I1216 06:24:00.601743 1627667 main.go:143] libmachine: Decoding PEM data...
	I1216 06:24:00.601754 1627667 main.go:143] libmachine: Parsing certificate...
	I1216 06:24:00.602136 1627667 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:24:00.619323 1627667 cli_runner.go:211] docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:24:00.619414 1627667 network_create.go:284] running [docker network inspect functional-364120] to gather additional debugging logs...
	I1216 06:24:00.619429 1627667 cli_runner.go:164] Run: docker network inspect functional-364120
	W1216 06:24:00.636820 1627667 cli_runner.go:211] docker network inspect functional-364120 returned with exit code 1
	I1216 06:24:00.636841 1627667 network_create.go:287] error running [docker network inspect functional-364120]: docker network inspect functional-364120: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-364120 not found
	I1216 06:24:00.636854 1627667 network_create.go:289] output of [docker network inspect functional-364120]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-364120 not found
	
	** /stderr **
	I1216 06:24:00.636964 1627667 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:24:00.654198 1627667 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001949e00}
	I1216 06:24:00.654236 1627667 network_create.go:124] attempt to create docker network functional-364120 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 06:24:00.654300 1627667 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-364120 functional-364120
	I1216 06:24:00.715811 1627667 network_create.go:108] docker network functional-364120 192.168.49.0/24 created
	I1216 06:24:00.715833 1627667 kic.go:121] calculated static IP "192.168.49.2" for the "functional-364120" container
	I1216 06:24:00.715920 1627667 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:24:00.730694 1627667 cli_runner.go:164] Run: docker volume create functional-364120 --label name.minikube.sigs.k8s.io=functional-364120 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:24:00.749030 1627667 oci.go:103] Successfully created a docker volume functional-364120
	I1216 06:24:00.749108 1627667 cli_runner.go:164] Run: docker run --rm --name functional-364120-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-364120 --entrypoint /usr/bin/test -v functional-364120:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:24:01.306102 1627667 oci.go:107] Successfully prepared a docker volume functional-364120
	I1216 06:24:01.306186 1627667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:24:01.306194 1627667 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:24:01.306294 1627667 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-364120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 06:24:05.208763 1627667 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-364120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.902420777s)
	I1216 06:24:05.208785 1627667 kic.go:203] duration metric: took 3.902588859s to extract preloaded images to volume ...
	W1216 06:24:05.208924 1627667 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 06:24:05.209030 1627667 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:24:05.265746 1627667 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-364120 --name functional-364120 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-364120 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-364120 --network functional-364120 --ip 192.168.49.2 --volume functional-364120:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:24:05.563526 1627667 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Running}}
	I1216 06:24:05.583560 1627667 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:24:05.605509 1627667 cli_runner.go:164] Run: docker exec functional-364120 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:24:05.661150 1627667 oci.go:144] the created container "functional-364120" has a running status.
	I1216 06:24:05.661168 1627667 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa...
	I1216 06:24:05.936277 1627667 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:24:05.959502 1627667 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:24:05.979943 1627667 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:24:05.979954 1627667 kic_runner.go:114] Args: [docker exec --privileged functional-364120 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:24:06.037395 1627667 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:24:06.060585 1627667 machine.go:94] provisionDockerMachine start ...
	I1216 06:24:06.060691 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:06.087145 1627667 main.go:143] libmachine: Using SSH client type: native
	I1216 06:24:06.087542 1627667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:24:06.087646 1627667 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:24:06.088445 1627667 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 06:24:09.220020 1627667 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:24:09.220034 1627667 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:24:09.220098 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:09.238293 1627667 main.go:143] libmachine: Using SSH client type: native
	I1216 06:24:09.238620 1627667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:24:09.238629 1627667 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:24:09.381665 1627667 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:24:09.381736 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:09.400638 1627667 main.go:143] libmachine: Using SSH client type: native
	I1216 06:24:09.400947 1627667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:24:09.400964 1627667 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:24:09.532859 1627667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:24:09.532876 1627667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:24:09.532897 1627667 ubuntu.go:190] setting up certificates
	I1216 06:24:09.532913 1627667 provision.go:84] configureAuth start
	I1216 06:24:09.532974 1627667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:24:09.550990 1627667 provision.go:143] copyHostCerts
	I1216 06:24:09.551071 1627667 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:24:09.551080 1627667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:24:09.551162 1627667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:24:09.551254 1627667 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:24:09.551258 1627667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:24:09.551289 1627667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:24:09.551336 1627667 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:24:09.551339 1627667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:24:09.551365 1627667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:24:09.551410 1627667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:24:09.677498 1627667 provision.go:177] copyRemoteCerts
	I1216 06:24:09.677561 1627667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:24:09.677603 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:09.693995 1627667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:24:09.788247 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:24:09.806089 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:24:09.823742 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:24:09.841883 1627667 provision.go:87] duration metric: took 308.956264ms to configureAuth
	I1216 06:24:09.841899 1627667 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:24:09.842092 1627667 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:24:09.842190 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:09.858954 1627667 main.go:143] libmachine: Using SSH client type: native
	I1216 06:24:09.859258 1627667 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:24:09.859269 1627667 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:24:10.172766 1627667 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:24:10.172779 1627667 machine.go:97] duration metric: took 4.112177783s to provisionDockerMachine
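The CRIO_MINIKUBE_OPTIONS file written above (/etc/sysconfig/crio.minikube) is presumably picked up by the crio systemd unit as an environment file, which is how the --insecure-registry flag reaches the runtime after the restart. A minimal way to confirm that by hand, assuming the functional-364120 container name from this run (these commands are not part of the test flow):

    # show the env file minikube just wrote inside the node container
    docker exec functional-364120 cat /etc/sysconfig/crio.minikube
    # check whether the crio unit references it as an EnvironmentFile
    docker exec functional-364120 systemctl cat crio | grep -i environmentfile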
	I1216 06:24:10.172788 1627667 client.go:176] duration metric: took 9.571221836s to LocalClient.Create
	I1216 06:24:10.172801 1627667 start.go:167] duration metric: took 9.571269139s to libmachine.API.Create "functional-364120"
	I1216 06:24:10.172808 1627667 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:24:10.172819 1627667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:24:10.172908 1627667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:24:10.172946 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:10.191661 1627667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:24:10.289000 1627667 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:24:10.292552 1627667 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:24:10.292571 1627667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:24:10.292591 1627667 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:24:10.292650 1627667 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:24:10.292735 1627667 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:24:10.292818 1627667 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:24:10.292870 1627667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:24:10.301560 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:24:10.319182 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:24:10.336323 1627667 start.go:296] duration metric: took 163.501756ms for postStartSetup
	I1216 06:24:10.336719 1627667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:24:10.355449 1627667 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:24:10.355744 1627667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:24:10.355797 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:10.372876 1627667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:24:10.465399 1627667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:24:10.469864 1627667 start.go:128] duration metric: took 9.872301148s to createHost
	I1216 06:24:10.469880 1627667 start.go:83] releasing machines lock for "functional-364120", held for 9.872414593s
	I1216 06:24:10.469949 1627667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:24:10.490489 1627667 out.go:179] * Found network options:
	I1216 06:24:10.493434 1627667 out.go:179]   - HTTP_PROXY=localhost:42709
	W1216 06:24:10.496268 1627667 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1216 06:24:10.499461 1627667 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1216 06:24:10.502512 1627667 ssh_runner.go:195] Run: cat /version.json
	I1216 06:24:10.502557 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:10.502576 1627667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:24:10.502630 1627667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:24:10.525387 1627667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:24:10.534136 1627667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:24:10.616036 1627667 ssh_runner.go:195] Run: systemctl --version
	I1216 06:24:10.705880 1627667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:24:10.741018 1627667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:24:10.745430 1627667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:24:10.745516 1627667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:24:10.773621 1627667 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 06:24:10.773635 1627667 start.go:496] detecting cgroup driver to use...
	I1216 06:24:10.773668 1627667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:24:10.773722 1627667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:24:10.791492 1627667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:24:10.804753 1627667 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:24:10.804817 1627667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:24:10.822887 1627667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:24:10.842933 1627667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:24:10.955509 1627667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:24:11.085494 1627667 docker.go:234] disabling docker service ...
	I1216 06:24:11.085554 1627667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:24:11.107957 1627667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:24:11.122485 1627667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:24:11.247161 1627667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:24:11.360298 1627667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:24:11.373256 1627667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:24:11.387234 1627667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:24:11.387311 1627667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:24:11.396298 1627667 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:24:11.396367 1627667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:24:11.405386 1627667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:24:11.414600 1627667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:24:11.425987 1627667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:24:11.436036 1627667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:24:11.445578 1627667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:24:11.458880 1627667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:24:11.467792 1627667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:24:11.475305 1627667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:24:11.482910 1627667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:24:11.592048 1627667 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:24:11.756087 1627667 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:24:11.756146 1627667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:24:11.759975 1627667 start.go:564] Will wait 60s for crictl version
	I1216 06:24:11.760029 1627667 ssh_runner.go:195] Run: which crictl
	I1216 06:24:11.763606 1627667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:24:11.794153 1627667 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:24:11.794230 1627667 ssh_runner.go:195] Run: crio --version
	I1216 06:24:11.823452 1627667 ssh_runner.go:195] Run: crio --version
	I1216 06:24:11.854864 1627667 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:24:11.857795 1627667 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:24:11.873538 1627667 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:24:11.877151 1627667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:24:11.886378 1627667 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:24:11.886483 1627667 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:24:11.886531 1627667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:24:11.920945 1627667 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:24:11.920957 1627667 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:24:11.921011 1627667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:24:11.948333 1627667 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:24:11.948350 1627667 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:24:11.948357 1627667 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:24:11.948445 1627667 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
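The [Service] override above is the 10-kubeadm.conf drop-in that gets copied to /etc/systemd/system/kubelet.service.d/ a few lines below. A short sketch for viewing the unit as systemd actually merges it, using the functional-364120 container name from this run (standard systemd tooling, not something the harness itself runs):

    # kubelet unit plus every drop-in, in merge order
    docker exec functional-364120 systemctl cat kubelet
    # extra flags kubeadm appends later during init
    docker exec functional-364120 cat /var/lib/kubelet/kubeadm-flags.env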
	I1216 06:24:11.948589 1627667 ssh_runner.go:195] Run: crio config
	I1216 06:24:12.019454 1627667 cni.go:84] Creating CNI manager for ""
	I1216 06:24:12.019462 1627667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:24:12.019476 1627667 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:24:12.019498 1627667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:24:12.019625 1627667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:24:12.019718 1627667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:24:12.028395 1627667 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:24:12.028500 1627667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:24:12.036648 1627667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:24:12.049665 1627667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:24:12.062458 1627667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
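Once the rendered config has been copied to the node (the 2221-byte scp above, later moved from kubeadm.yaml.new to kubeadm.yaml), it can be exercised without touching the cluster via kubeadm's dry-run mode. This is only a sketch built from paths that appear in this log; the harness itself does not run it:

    docker exec functional-364120 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run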
	I1216 06:24:12.075033 1627667 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:24:12.078492 1627667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:24:12.088051 1627667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:24:12.192202 1627667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:24:12.212984 1627667 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:24:12.212995 1627667 certs.go:195] generating shared ca certs ...
	I1216 06:24:12.213010 1627667 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:12.213154 1627667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:24:12.213195 1627667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:24:12.213201 1627667 certs.go:257] generating profile certs ...
	I1216 06:24:12.213262 1627667 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:24:12.213272 1627667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt with IP's: []
	I1216 06:24:12.285595 1627667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt ...
	I1216 06:24:12.285610 1627667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: {Name:mke3ef02c1df88cd521040415b8f4f4e3022f74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:12.285803 1627667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key ...
	I1216 06:24:12.285809 1627667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key: {Name:mk75fcec74ef3239ce26157498fa1c1f6664b0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:12.285884 1627667 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:24:12.285897 1627667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt.a6be103a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 06:24:13.101258 1627667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt.a6be103a ...
	I1216 06:24:13.101277 1627667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt.a6be103a: {Name:mk9e5da2df52e1ca3fb692a0bebcca46586b5e9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:13.101501 1627667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a ...
	I1216 06:24:13.101510 1627667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a: {Name:mkc86087371454d37e4a77b141c4e813054f17f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:13.101595 1627667 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt.a6be103a -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt
	I1216 06:24:13.101677 1627667 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key
	I1216 06:24:13.101732 1627667 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:24:13.101747 1627667 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt with IP's: []
	I1216 06:24:13.270488 1627667 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt ...
	I1216 06:24:13.270504 1627667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt: {Name:mk6e861a6d251881cd7bc094c7f92b3897e8ffaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:13.270694 1627667 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key ...
	I1216 06:24:13.270702 1627667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key: {Name:mk130fab600dcec6b8d82c8ff04ba8803fa265b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:24:13.270889 1627667 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:24:13.270929 1627667 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:24:13.270937 1627667 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:24:13.270968 1627667 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:24:13.270997 1627667 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:24:13.271019 1627667 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:24:13.271061 1627667 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:24:13.271696 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:24:13.290682 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:24:13.308773 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:24:13.326547 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:24:13.344489 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:24:13.361647 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:24:13.379639 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:24:13.396883 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:24:13.413028 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:24:13.430300 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:24:13.447712 1627667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:24:13.464652 1627667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:24:13.477445 1627667 ssh_runner.go:195] Run: openssl version
	I1216 06:24:13.483756 1627667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:24:13.490865 1627667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:24:13.498156 1627667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:24:13.501875 1627667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:24:13.501945 1627667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:24:13.542824 1627667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:24:13.550246 1627667 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1599255.pem /etc/ssl/certs/51391683.0
	I1216 06:24:13.557588 1627667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:24:13.565119 1627667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:24:13.572571 1627667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:24:13.576392 1627667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:24:13.576485 1627667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:24:13.617390 1627667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:24:13.624849 1627667 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/15992552.pem /etc/ssl/certs/3ec20f2e.0
	I1216 06:24:13.632148 1627667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:24:13.639565 1627667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:24:13.647076 1627667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:24:13.651302 1627667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:24:13.651358 1627667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:24:13.693823 1627667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:24:13.701376 1627667 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
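The openssl/ln pairs above follow OpenSSL's hashed-directory convention: each symlink in /etc/ssl/certs is named after the certificate's subject hash with a .0 suffix. Reproducing the last pair by hand, with the paths taken directly from this log:

    # prints b5213941, the subject hash used as the symlink name
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0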
	I1216 06:24:13.708941 1627667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:24:13.713514 1627667 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:24:13.713558 1627667 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:24:13.713631 1627667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:24:13.713695 1627667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:24:13.743920 1627667 cri.go:89] found id: ""
	I1216 06:24:13.743980 1627667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:24:13.752026 1627667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:24:13.760079 1627667 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:24:13.760136 1627667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:24:13.767827 1627667 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:24:13.767846 1627667 kubeadm.go:158] found existing configuration files:
	
	I1216 06:24:13.767901 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:24:13.775654 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:24:13.775721 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:24:13.783106 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:24:13.790686 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:24:13.790751 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:24:13.798103 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:24:13.805830 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:24:13.805891 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:24:13.813375 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:24:13.821184 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:24:13.821250 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:24:13.828656 1627667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:24:13.867056 1627667 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:24:13.867106 1627667 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:24:13.942655 1627667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:24:13.942720 1627667 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:24:13.942755 1627667 kubeadm.go:319] OS: Linux
	I1216 06:24:13.942798 1627667 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:24:13.942845 1627667 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:24:13.942891 1627667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:24:13.942937 1627667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:24:13.942984 1627667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:24:13.943039 1627667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:24:13.943092 1627667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:24:13.943138 1627667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:24:13.943184 1627667 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:24:14.022942 1627667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:24:14.023046 1627667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:24:14.023193 1627667 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:24:14.032219 1627667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:24:14.038713 1627667 out.go:252]   - Generating certificates and keys ...
	I1216 06:24:14.038825 1627667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:24:14.038910 1627667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:24:14.137714 1627667 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:24:14.381476 1627667 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:24:14.585082 1627667 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:24:14.730013 1627667 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:24:15.123261 1627667 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:24:15.123566 1627667 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-364120 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 06:24:15.387691 1627667 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:24:15.388001 1627667 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-364120 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 06:24:15.706263 1627667 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:24:15.929436 1627667 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:24:16.115457 1627667 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:24:16.115690 1627667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:24:16.411261 1627667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:24:16.568215 1627667 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:24:16.761511 1627667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:24:16.884503 1627667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:24:17.134695 1627667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:24:17.135484 1627667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:24:17.139246 1627667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:24:17.143190 1627667 out.go:252]   - Booting up control plane ...
	I1216 06:24:17.143289 1627667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:24:17.143365 1627667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:24:17.144318 1627667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:24:17.159760 1627667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:24:17.160102 1627667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:24:17.168312 1627667 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:24:17.168557 1627667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:24:17.168722 1627667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:24:17.302499 1627667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:24:17.302611 1627667 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:28:17.302874 1627667 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288025s
	I1216 06:28:17.302892 1627667 kubeadm.go:319] 
	I1216 06:28:17.302945 1627667 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:28:17.302976 1627667 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:28:17.303073 1627667 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:28:17.303077 1627667 kubeadm.go:319] 
	I1216 06:28:17.303174 1627667 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:28:17.303203 1627667 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:28:17.303231 1627667 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:28:17.303234 1627667 kubeadm.go:319] 
	I1216 06:28:17.308879 1627667 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:28:17.309276 1627667 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:28:17.309377 1627667 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:28:17.309598 1627667 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:28:17.309603 1627667 kubeadm.go:319] 
	I1216 06:28:17.309666 1627667 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 06:28:17.309783 1627667 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-364120 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-364120 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:28:17.309872 1627667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:28:17.721078 1627667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:28:17.734096 1627667 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:28:17.734158 1627667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:28:17.741968 1627667 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:28:17.741979 1627667 kubeadm.go:158] found existing configuration files:
	
	I1216 06:28:17.742030 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:28:17.749993 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:28:17.750050 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:28:17.757743 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:28:17.765704 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:28:17.765762 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:28:17.773344 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:28:17.781582 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:28:17.781639 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:28:17.789426 1627667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:28:17.797474 1627667 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:28:17.797532 1627667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:28:17.805580 1627667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:28:17.848727 1627667 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:28:17.848774 1627667 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:28:17.919811 1627667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:28:17.919876 1627667 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:28:17.919910 1627667 kubeadm.go:319] OS: Linux
	I1216 06:28:17.919954 1627667 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:28:17.920001 1627667 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:28:17.920048 1627667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:28:17.920095 1627667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:28:17.920142 1627667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:28:17.920225 1627667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:28:17.920270 1627667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:28:17.920317 1627667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:28:17.920406 1627667 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:28:17.986591 1627667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:28:17.986688 1627667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:28:17.986773 1627667 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:28:17.996930 1627667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:28:18.009820 1627667 out.go:252]   - Generating certificates and keys ...
	I1216 06:28:18.009985 1627667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:28:18.010059 1627667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:28:18.010147 1627667 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:28:18.010217 1627667 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:28:18.010296 1627667 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:28:18.010358 1627667 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:28:18.010430 1627667 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:28:18.010503 1627667 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:28:18.010585 1627667 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:28:18.010674 1627667 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:28:18.010710 1627667 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:28:18.010787 1627667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:28:18.252358 1627667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:28:18.751873 1627667 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:28:18.986111 1627667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:28:19.606578 1627667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:28:19.829446 1627667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:28:19.829951 1627667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:28:19.834484 1627667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:28:19.837869 1627667 out.go:252]   - Booting up control plane ...
	I1216 06:28:19.837973 1627667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:28:19.838051 1627667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:28:19.838809 1627667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:28:19.854040 1627667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:28:19.854144 1627667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:28:19.862098 1627667 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:28:19.862352 1627667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:28:19.862510 1627667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:28:19.995159 1627667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:28:19.995272 1627667 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:32:19.995237 1627667 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000340218s
	I1216 06:32:19.995256 1627667 kubeadm.go:319] 
	I1216 06:32:19.995312 1627667 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:32:19.995345 1627667 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:32:19.995473 1627667 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:32:19.995477 1627667 kubeadm.go:319] 
	I1216 06:32:19.995580 1627667 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:32:19.995611 1627667 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:32:19.995640 1627667 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:32:19.995643 1627667 kubeadm.go:319] 
	I1216 06:32:19.999924 1627667 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:32:20.000342 1627667 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:32:20.000451 1627667 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:32:20.000717 1627667 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:32:20.000723 1627667 kubeadm.go:319] 
	I1216 06:32:20.000791 1627667 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:32:20.000843 1627667 kubeadm.go:403] duration metric: took 8m6.287289324s to StartCluster
	I1216 06:32:20.000878 1627667 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:32:20.000951 1627667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:32:20.035901 1627667 cri.go:89] found id: ""
	I1216 06:32:20.035916 1627667 logs.go:282] 0 containers: []
	W1216 06:32:20.035924 1627667 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:32:20.035929 1627667 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:32:20.035991 1627667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:32:20.063839 1627667 cri.go:89] found id: ""
	I1216 06:32:20.063854 1627667 logs.go:282] 0 containers: []
	W1216 06:32:20.063862 1627667 logs.go:284] No container was found matching "etcd"
	I1216 06:32:20.063866 1627667 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:32:20.063932 1627667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:32:20.092179 1627667 cri.go:89] found id: ""
	I1216 06:32:20.092193 1627667 logs.go:282] 0 containers: []
	W1216 06:32:20.092201 1627667 logs.go:284] No container was found matching "coredns"
	I1216 06:32:20.092206 1627667 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:32:20.092269 1627667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:32:20.119211 1627667 cri.go:89] found id: ""
	I1216 06:32:20.119225 1627667 logs.go:282] 0 containers: []
	W1216 06:32:20.119232 1627667 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:32:20.119237 1627667 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:32:20.119302 1627667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:32:20.148074 1627667 cri.go:89] found id: ""
	I1216 06:32:20.148088 1627667 logs.go:282] 0 containers: []
	W1216 06:32:20.148095 1627667 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:32:20.148100 1627667 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:32:20.148160 1627667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:32:20.182791 1627667 cri.go:89] found id: ""
	I1216 06:32:20.182805 1627667 logs.go:282] 0 containers: []
	W1216 06:32:20.182813 1627667 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:32:20.182818 1627667 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:32:20.182879 1627667 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:32:20.218349 1627667 cri.go:89] found id: ""
	I1216 06:32:20.218363 1627667 logs.go:282] 0 containers: []
	W1216 06:32:20.218370 1627667 logs.go:284] No container was found matching "kindnet"
	I1216 06:32:20.219109 1627667 logs.go:123] Gathering logs for kubelet ...
	I1216 06:32:20.219125 1627667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:32:20.285823 1627667 logs.go:123] Gathering logs for dmesg ...
	I1216 06:32:20.285847 1627667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:32:20.300638 1627667 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:32:20.300656 1627667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:32:20.369781 1627667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:32:20.361446    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.362019    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.363665    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.364168    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.365878    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:32:20.361446    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.362019    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.363665    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.364168    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:20.365878    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:32:20.369791 1627667 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:32:20.369802 1627667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:32:20.402645 1627667 logs.go:123] Gathering logs for container status ...
	I1216 06:32:20.402664 1627667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:32:20.431779 1627667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000340218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:32:20.431813 1627667 out.go:285] * 
	W1216 06:32:20.431877 1627667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000340218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:32:20.431898 1627667 out.go:285] * 
	W1216 06:32:20.434021 1627667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:32:20.438950 1627667 out.go:203] 
	W1216 06:32:20.441872 1627667 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000340218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:32:20.442089 1627667 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:32:20.442121 1627667 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:32:20.446850 1627667 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749434542Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749621043Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749758741Z" level=info msg="Create NRI interface"
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749931803Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749952406Z" level=info msg="runtime interface created"
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749966051Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749972246Z" level=info msg="runtime interface starting up..."
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749978392Z" level=info msg="starting plugins..."
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.749993251Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:24:11 functional-364120 crio[844]: time="2025-12-16T06:24:11.750063421Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:24:11 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 06:24:14 functional-364120 crio[844]: time="2025-12-16T06:24:14.026456095Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=90181666-933a-48ac-92c0-4e68fc759dab name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:24:14 functional-364120 crio[844]: time="2025-12-16T06:24:14.028235625Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=fc8ee0a3-1e4a-4f9f-a400-93b050bcfec6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:24:14 functional-364120 crio[844]: time="2025-12-16T06:24:14.029030699Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e9726130-617a-4929-8a6f-02273d9a5402 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:24:14 functional-364120 crio[844]: time="2025-12-16T06:24:14.029601568Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5f176322-b485-494b-959e-75ba5424d314 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:24:14 functional-364120 crio[844]: time="2025-12-16T06:24:14.030177156Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=79638b71-16ce-41fe-8558-ecfb5a871059 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:24:14 functional-364120 crio[844]: time="2025-12-16T06:24:14.030753269Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e7ae4361-c4e7-4e59-9d9b-e2f53d17c4a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:24:14 functional-364120 crio[844]: time="2025-12-16T06:24:14.031195753Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=2678313f-2ab4-43f4-8a0b-b5512fdafa3c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:28:17 functional-364120 crio[844]: time="2025-12-16T06:28:17.989496013Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f1d390bf-389f-4770-a36a-ab765fd6dc2f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:28:17 functional-364120 crio[844]: time="2025-12-16T06:28:17.990403952Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=429ce945-3b7f-4853-a1d9-9b7ed5539f2a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:28:17 functional-364120 crio[844]: time="2025-12-16T06:28:17.990891124Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=cffaa8b6-c1ef-4803-adbd-98beac53a82e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:28:17 functional-364120 crio[844]: time="2025-12-16T06:28:17.991358349Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=17b51815-727b-4ea7-b8aa-51f878c8e145 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:28:17 functional-364120 crio[844]: time="2025-12-16T06:28:17.991842485Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=c22be01a-2f46-4127-a08d-b292735cc61e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:28:17 functional-364120 crio[844]: time="2025-12-16T06:28:17.992311917Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=306135d7-e235-4c79-bd93-d6a332c6b8da name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:28:17 functional-364120 crio[844]: time="2025-12-16T06:28:17.992858338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a31b1afa-ba85-46a0-b017-5fa4a0b43970 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:32:21.404663    4960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:21.405410    4960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:21.407343    4960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:21.407902    4960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:32:21.409667    4960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:32:21 up  9:14,  0 user,  load average: 0.95, 0.60, 1.10
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:32:18 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:32:19 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 16 06:32:19 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:32:19 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:32:19 functional-364120 kubelet[4765]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:32:19 functional-364120 kubelet[4765]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:32:19 functional-364120 kubelet[4765]: E1216 06:32:19.443149    4765 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:32:19 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:32:19 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:32:20 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 16 06:32:20 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:32:20 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:32:20 functional-364120 kubelet[4812]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:32:20 functional-364120 kubelet[4812]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:32:20 functional-364120 kubelet[4812]: E1216 06:32:20.211254    4812 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:32:20 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:32:20 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:32:20 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 16 06:32:20 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:32:20 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:32:20 functional-364120 kubelet[4875]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:32:20 functional-364120 kubelet[4875]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:32:20 functional-364120 kubelet[4875]: E1216 06:32:20.961685    4875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:32:20 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:32:20 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 6 (360.822318ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 06:32:21.882239 1633577 status.go:458] kubeconfig endpoint: get endpoint: "functional-364120" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.88s)
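Note on the failure above: the kubelet journal in the log dump shows kubelet v1.35.0-beta.0 exiting with "failed to validate kubelet configuration ... kubelet is configured to not run on a host using cgroup v1", and the kubeadm SystemVerification warning points at the 'FailCgroupV1' kubelet configuration option, so the repeated 4m0s wait-control-plane timeouts appear to stem from running the v1.35 kubelet on a cgroup v1 host (kernel 5.15.0-1084-aws, cgroupfs driver). The sketch below only restates the two workarounds suggested by the log itself; it has not been verified against this run, the kubelet-config-patch.yaml filename is illustrative, and the failCgroupV1 field name/casing in KubeletConfiguration is an assumption taken from the warning text (KEP-5573), not confirmed by this report.

	# Suggestion printed by minikube in the log above (unverified for this failure):
	minikube start -p functional-364120 --extra-config=kubelet.cgroup-driver=systemd

	# Per the kubeadm [WARNING SystemVerification] text, kubelet v1.35+ refuses
	# cgroup v1 hosts unless 'FailCgroupV1' is set to 'false' in the
	# KubeletConfiguration and the validation is explicitly skipped; the YAML
	# field name and the patch filename below are assumptions for illustration.
	cat <<'EOF' > kubelet-config-patch.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Per the same warning, the longer-term fix is migrating the host to cgroup v2 rather than re-enabling v1 support.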

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 06:32:21.899438 1599255 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-364120 --alsologtostderr -v=8
E1216 06:33:08.325404 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:33:36.033188 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:36:06.816967 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:37:29.890504 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:38:08.326347 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-364120 --alsologtostderr -v=8: exit status 80 (6m6.023322078s)

                                                
                                                
-- stdout --
	* [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:32:21.945678 1633651 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:32:21.945884 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.945913 1633651 out.go:374] Setting ErrFile to fd 2...
	I1216 06:32:21.945938 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.946236 1633651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:32:21.946683 1633651 out.go:368] Setting JSON to false
	I1216 06:32:21.947701 1633651 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33293,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:32:21.947809 1633651 start.go:143] virtualization:  
	I1216 06:32:21.951426 1633651 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:32:21.955191 1633651 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:32:21.955256 1633651 notify.go:221] Checking for updates...
	I1216 06:32:21.958173 1633651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:32:21.961154 1633651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:21.964261 1633651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:32:21.967271 1633651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:32:21.970206 1633651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:32:21.973784 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:21.973958 1633651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:32:22.008677 1633651 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:32:22.008820 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.071471 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.061898568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.071599 1633651 docker.go:319] overlay module found
	I1216 06:32:22.074586 1633651 out.go:179] * Using the docker driver based on existing profile
	I1216 06:32:22.077482 1633651 start.go:309] selected driver: docker
	I1216 06:32:22.077504 1633651 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.077607 1633651 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:32:22.077718 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.133247 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.124039104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.133687 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:22.133753 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:22.133810 1633651 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.136881 1633651 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:32:22.139682 1633651 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:32:22.142506 1633651 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:32:22.145532 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:22.145589 1633651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:32:22.145600 1633651 cache.go:65] Caching tarball of preloaded images
	I1216 06:32:22.145641 1633651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:32:22.145690 1633651 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:32:22.145701 1633651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:32:22.145813 1633651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:32:22.165180 1633651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:32:22.165200 1633651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:32:22.165222 1633651 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:32:22.165256 1633651 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:32:22.165333 1633651 start.go:364] duration metric: took 48.796µs to acquireMachinesLock for "functional-364120"
	I1216 06:32:22.165354 1633651 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:32:22.165360 1633651 fix.go:54] fixHost starting: 
	I1216 06:32:22.165613 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:22.182587 1633651 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:32:22.182616 1633651 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:32:22.185776 1633651 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:32:22.185814 1633651 machine.go:94] provisionDockerMachine start ...
	I1216 06:32:22.185896 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.204643 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.205060 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.205076 1633651 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:32:22.340733 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.340761 1633651 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:32:22.340833 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.359374 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.359683 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.359701 1633651 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:32:22.513698 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.513777 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.532110 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.532428 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.532445 1633651 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:32:22.668828 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:32:22.668856 1633651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:32:22.668881 1633651 ubuntu.go:190] setting up certificates
	I1216 06:32:22.668900 1633651 provision.go:84] configureAuth start
	I1216 06:32:22.668975 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:22.686750 1633651 provision.go:143] copyHostCerts
	I1216 06:32:22.686794 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686839 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:32:22.686850 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686924 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:32:22.687014 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687038 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:32:22.687049 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687078 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:32:22.687125 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687146 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:32:22.687154 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687181 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:32:22.687234 1633651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:32:22.948191 1633651 provision.go:177] copyRemoteCerts
	I1216 06:32:22.948261 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:32:22.948301 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.965164 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.060207 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 06:32:23.060306 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:32:23.077647 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 06:32:23.077712 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:32:23.095215 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 06:32:23.095292 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:32:23.112813 1633651 provision.go:87] duration metric: took 443.895655ms to configureAuth
	I1216 06:32:23.112841 1633651 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:32:23.113039 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:23.113160 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.130832 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:23.131171 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:23.131200 1633651 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:32:23.456336 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:32:23.456407 1633651 machine.go:97] duration metric: took 1.270583728s to provisionDockerMachine
	I1216 06:32:23.456430 1633651 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:32:23.456444 1633651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:32:23.456549 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:32:23.456623 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.474584 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.572573 1633651 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:32:23.576065 1633651 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 06:32:23.576089 1633651 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 06:32:23.576094 1633651 command_runner.go:130] > VERSION_ID="12"
	I1216 06:32:23.576099 1633651 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 06:32:23.576104 1633651 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 06:32:23.576107 1633651 command_runner.go:130] > ID=debian
	I1216 06:32:23.576111 1633651 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 06:32:23.576116 1633651 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 06:32:23.576121 1633651 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 06:32:23.576161 1633651 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:32:23.576184 1633651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:32:23.576195 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:32:23.576257 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:32:23.576334 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:32:23.576345 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 06:32:23.576419 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:32:23.576428 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> /etc/test/nested/copy/1599255/hosts
	I1216 06:32:23.576497 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:32:23.584272 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:23.602073 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:32:23.620211 1633651 start.go:296] duration metric: took 163.749097ms for postStartSetup
	I1216 06:32:23.620332 1633651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:32:23.620393 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.637607 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.729817 1633651 command_runner.go:130] > 11%
	I1216 06:32:23.729920 1633651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:32:23.734460 1633651 command_runner.go:130] > 173G
	I1216 06:32:23.734888 1633651 fix.go:56] duration metric: took 1.569523929s for fixHost
	I1216 06:32:23.734910 1633651 start.go:83] releasing machines lock for "functional-364120", held for 1.569567934s
	I1216 06:32:23.734992 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:23.753392 1633651 ssh_runner.go:195] Run: cat /version.json
	I1216 06:32:23.753419 1633651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:32:23.753445 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.753482 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.775365 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.776190 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.872489 1633651 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 06:32:23.964085 1633651 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 06:32:23.966949 1633651 ssh_runner.go:195] Run: systemctl --version
	I1216 06:32:23.972881 1633651 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 06:32:23.972927 1633651 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 06:32:23.973332 1633651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:32:24.017041 1633651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 06:32:24.021688 1633651 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 06:32:24.021875 1633651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:32:24.021943 1633651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:32:24.030849 1633651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:32:24.030874 1633651 start.go:496] detecting cgroup driver to use...
	I1216 06:32:24.030909 1633651 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:32:24.030973 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:32:24.046872 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:32:24.060299 1633651 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:32:24.060392 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:32:24.076826 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:32:24.090325 1633651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:32:24.210022 1633651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:32:24.329836 1633651 docker.go:234] disabling docker service ...
	I1216 06:32:24.329935 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:32:24.345813 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:32:24.359799 1633651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:32:24.482084 1633651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:32:24.592216 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:32:24.607323 1633651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:32:24.620059 1633651 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 06:32:24.621570 1633651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:32:24.621685 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.630471 1633651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:32:24.630583 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.638917 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.647722 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.656274 1633651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:32:24.664335 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.674249 1633651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.682423 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.691805 1633651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:32:24.699096 1633651 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 06:32:24.700134 1633651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:32:24.707996 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:24.828004 1633651 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:32:24.995020 1633651 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:32:24.995147 1633651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:32:24.998673 1633651 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 06:32:24.998710 1633651 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 06:32:24.998717 1633651 command_runner.go:130] > Device: 0,73	Inode: 1638        Links: 1
	I1216 06:32:24.998724 1633651 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:24.998732 1633651 command_runner.go:130] > Access: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998737 1633651 command_runner.go:130] > Modify: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998743 1633651 command_runner.go:130] > Change: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998747 1633651 command_runner.go:130] >  Birth: -
	I1216 06:32:24.999054 1633651 start.go:564] Will wait 60s for crictl version
	I1216 06:32:24.999171 1633651 ssh_runner.go:195] Run: which crictl
	I1216 06:32:25.003803 1633651 command_runner.go:130] > /usr/local/bin/crictl
	I1216 06:32:25.003920 1633651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:32:25.030365 1633651 command_runner.go:130] > Version:  0.1.0
	I1216 06:32:25.030401 1633651 command_runner.go:130] > RuntimeName:  cri-o
	I1216 06:32:25.030407 1633651 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 06:32:25.030415 1633651 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 06:32:25.032653 1633651 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:32:25.032766 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.062220 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.062244 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.062252 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.062258 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.062271 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.062277 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.062281 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.062287 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.062295 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.062298 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.062306 1633651 command_runner.go:130] >      static
	I1216 06:32:25.062310 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.062314 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.062318 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.062324 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.062328 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.062335 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.062355 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.062366 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.062371 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.062783 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.091083 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.091135 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.091142 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.091169 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.091182 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.091188 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.091193 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.091205 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.091210 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.091218 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.091223 1633651 command_runner.go:130] >      static
	I1216 06:32:25.091226 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.091230 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.091244 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.091254 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.091262 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.091274 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.091278 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.091282 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.091286 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.097058 1633651 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:32:25.100055 1633651 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:32:25.116990 1633651 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:32:25.121062 1633651 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 06:32:25.121217 1633651 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:32:25.121338 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:25.121400 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.161132 1633651 command_runner.go:130] > {
	I1216 06:32:25.161156 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.161162 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161171 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.161176 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161183 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.161197 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161202 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161212 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.161220 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.161224 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161229 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.161237 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161245 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161248 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161253 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161267 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.161272 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161278 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.161289 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161295 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161303 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.161313 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.161317 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161325 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.161333 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161342 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161350 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161353 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161360 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.161368 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161373 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.161376 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161380 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161388 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.161400 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.161403 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161408 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.161415 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.161424 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161431 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161435 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161442 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.161450 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161456 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.161459 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161469 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161477 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.161485 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.161489 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161493 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.161499 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161511 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161514 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161529 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161540 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161544 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161554 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161567 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.161571 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161578 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.161582 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161588 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161601 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.161614 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.161618 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161623 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.161631 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161636 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161639 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161643 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161647 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161667 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161675 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161682 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.161686 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161692 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.161701 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161705 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161714 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.161726 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.161730 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161734 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.161738 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161743 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161748 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161753 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161758 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161761 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161764 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161771 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.161779 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161785 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.161788 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161793 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161801 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.161814 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.161818 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161822 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.161826 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161830 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161836 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161839 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161846 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.161850 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161863 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.161870 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161874 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161882 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.161905 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.161913 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161918 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.161921 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161925 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161929 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161933 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161937 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161943 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161947 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161956 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.161960 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161965 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.161971 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161975 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161995 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.162003 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.162006 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.162010 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.162013 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.162017 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.162020 1633651 command_runner.go:130] >       },
	I1216 06:32:25.162029 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.162036 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.162040 1633651 command_runner.go:130] >     }
	I1216 06:32:25.162043 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.162046 1633651 command_runner.go:130] > }
	I1216 06:32:25.162230 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.162244 1633651 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:32:25.162311 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.189040 1633651 command_runner.go:130] > {
	I1216 06:32:25.189061 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.189066 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189085 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.189090 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189096 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.189100 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189103 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189112 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.189120 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.189125 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189133 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.189141 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189146 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189157 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189161 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189168 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.189171 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189177 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.189180 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189184 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189193 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.189201 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.189204 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189208 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.189212 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189217 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189220 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189223 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189230 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.189233 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189239 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.189242 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189246 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189255 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.189263 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.189266 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189270 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.189274 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.189278 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189281 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189284 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189291 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.189295 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189300 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.189309 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189313 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189322 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.189330 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.189333 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189337 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.189341 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189345 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189348 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189357 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189361 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189364 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189367 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189375 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.189378 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189384 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.189387 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189391 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189399 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.189407 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.189411 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189420 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.189423 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189427 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189431 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189435 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189439 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189444 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189453 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189460 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.189464 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189469 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.189473 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189486 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189495 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.189505 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.189508 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189513 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.189516 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189524 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189527 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189531 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189536 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189539 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189542 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189549 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.189553 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189558 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.189561 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189564 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189572 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.189580 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.189583 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189587 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.189591 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189595 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189597 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189600 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189607 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.189611 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189616 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.189620 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189623 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189631 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.189649 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.189653 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189660 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.189664 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189668 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189671 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189675 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189679 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189682 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189685 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189691 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.189695 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189700 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.189703 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189707 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189714 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.189722 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.189725 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189729 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.189732 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189736 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.189740 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189744 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189748 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.189751 1633651 command_runner.go:130] >     }
	I1216 06:32:25.189754 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.189758 1633651 command_runner.go:130] > }
	I1216 06:32:25.192082 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.192103 1633651 cache_images.go:86] Images are preloaded, skipping loading
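	The two log lines above are minikube concluding, from the `sudo crictl images --output json` payload printed before them, that every image required for this Kubernetes version is already present in the CRI-O image store, so loading cached images is skipped. As a rough illustration of that check, the following is a minimal, hypothetical Go sketch (not minikube's actual cache_images.go code; the JSON field names simply follow the output shown in this log):

	    // Sketch only: decode `crictl images --output json` and list the
	    // repo tags present on the node, the same data minikube inspects
	    // before deciding whether the preload can be skipped.
	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"os/exec"
	    )

	    type crictlImages struct {
	    	Images []struct {
	    		ID       string   `json:"id"`
	    		RepoTags []string `json:"repoTags"`
	    		Size     string   `json:"size"`
	    		Pinned   bool     `json:"pinned"`
	    	} `json:"images"`
	    }

	    func main() {
	    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	    	if err != nil {
	    		panic(err)
	    	}
	    	var imgs crictlImages
	    	if err := json.Unmarshal(out, &imgs); err != nil {
	    		panic(err)
	    	}
	    	for _, img := range imgs.Images {
	    		fmt.Println(img.RepoTags, img.Size)
	    	}
	    }

	Run on the node, this would print the same repo tags listed above (kindnetd, storage-provisioner, coredns, etcd, the v1.35.0-beta.0 control-plane images and pause); minikube compares that set against the images required for the requested version before loading anything.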
	I1216 06:32:25.192110 1633651 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:32:25.192213 1633651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
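	The kubelet unit override printed above is generated from the node parameters recorded in the config line that follows it: the Kubernetes version (v1.35.0-beta.0), the profile/node name (functional-364120) and the node IP (192.168.49.2). Purely as an illustration, and not minikube's actual kubeadm.go template, the ExecStart line could be assembled from those inputs like this:

	    // Hypothetical sketch: rebuild the kubelet ExecStart string shown in
	    // the log above from the version, node name and node IP.
	    package main

	    import "fmt"

	    func kubeletExecStart(version, nodeName, nodeIP string) string {
	    	return fmt.Sprintf(
	    		"/var/lib/minikube/binaries/%s/kubelet "+
	    			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
	    			"--cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml "+
	    			"--enforce-node-allocatable= --hostname-override=%s "+
	    			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
	    		version, nodeName, nodeIP)
	    }

	    func main() {
	    	fmt.Println(kubeletExecStart("v1.35.0-beta.0", "functional-364120", "192.168.49.2"))
	    }

	The output matches the ExecStart captured in the log; the flags simply pin the kubelet to the minikube-managed kubeconfig, hostname and cluster network address.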
	I1216 06:32:25.192293 1633651 ssh_runner.go:195] Run: crio config
	I1216 06:32:25.241430 1633651 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 06:32:25.241454 1633651 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 06:32:25.241463 1633651 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 06:32:25.241467 1633651 command_runner.go:130] > #
	I1216 06:32:25.241474 1633651 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 06:32:25.241481 1633651 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 06:32:25.241487 1633651 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 06:32:25.241503 1633651 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 06:32:25.241507 1633651 command_runner.go:130] > # reload'.
	I1216 06:32:25.241513 1633651 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 06:32:25.241520 1633651 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 06:32:25.241526 1633651 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 06:32:25.241533 1633651 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 06:32:25.241546 1633651 command_runner.go:130] > [crio]
	I1216 06:32:25.241552 1633651 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 06:32:25.241558 1633651 command_runner.go:130] > # containers images, in this directory.
	I1216 06:32:25.242467 1633651 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 06:32:25.242525 1633651 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 06:32:25.243204 1633651 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 06:32:25.243220 1633651 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1216 06:32:25.243745 1633651 command_runner.go:130] > # imagestore = ""
	I1216 06:32:25.243759 1633651 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 06:32:25.243765 1633651 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 06:32:25.244384 1633651 command_runner.go:130] > # storage_driver = "overlay"
	I1216 06:32:25.244405 1633651 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 06:32:25.244412 1633651 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 06:32:25.244775 1633651 command_runner.go:130] > # storage_option = [
	I1216 06:32:25.245138 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.245151 1633651 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 06:32:25.245190 1633651 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 06:32:25.245804 1633651 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 06:32:25.245817 1633651 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 06:32:25.245829 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 06:32:25.245834 1633651 command_runner.go:130] > # always happen on a node reboot
	I1216 06:32:25.246485 1633651 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 06:32:25.246511 1633651 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 06:32:25.246534 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 06:32:25.246545 1633651 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 06:32:25.247059 1633651 command_runner.go:130] > # version_file_persist = ""
	I1216 06:32:25.247081 1633651 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 06:32:25.247091 1633651 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 06:32:25.247784 1633651 command_runner.go:130] > # internal_wipe = true
	I1216 06:32:25.247805 1633651 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 06:32:25.247812 1633651 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 06:32:25.248459 1633651 command_runner.go:130] > # internal_repair = true
	I1216 06:32:25.248493 1633651 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 06:32:25.248501 1633651 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 06:32:25.248507 1633651 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 06:32:25.249140 1633651 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 06:32:25.249157 1633651 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 06:32:25.249161 1633651 command_runner.go:130] > [crio.api]
	I1216 06:32:25.249167 1633651 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 06:32:25.251400 1633651 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 06:32:25.251419 1633651 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 06:32:25.251426 1633651 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 06:32:25.251453 1633651 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 06:32:25.251465 1633651 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 06:32:25.251470 1633651 command_runner.go:130] > # stream_port = "0"
	I1216 06:32:25.251476 1633651 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 06:32:25.251480 1633651 command_runner.go:130] > # stream_enable_tls = false
	I1216 06:32:25.251487 1633651 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 06:32:25.251494 1633651 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 06:32:25.251501 1633651 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 06:32:25.251510 1633651 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251527 1633651 command_runner.go:130] > # stream_tls_cert = ""
	I1216 06:32:25.251540 1633651 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 06:32:25.251546 1633651 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251563 1633651 command_runner.go:130] > # stream_tls_key = ""
	I1216 06:32:25.251575 1633651 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 06:32:25.251585 1633651 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 06:32:25.251591 1633651 command_runner.go:130] > # automatically pick up the changes.
	I1216 06:32:25.251603 1633651 command_runner.go:130] > # stream_tls_ca = ""
	I1216 06:32:25.251622 1633651 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251658 1633651 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 06:32:25.251672 1633651 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251677 1633651 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 06:32:25.251692 1633651 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 06:32:25.251703 1633651 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 06:32:25.251707 1633651 command_runner.go:130] > [crio.runtime]
	I1216 06:32:25.251713 1633651 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 06:32:25.251719 1633651 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 06:32:25.251735 1633651 command_runner.go:130] > # "nofile=1024:2048"
	I1216 06:32:25.251746 1633651 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 06:32:25.251751 1633651 command_runner.go:130] > # default_ulimits = [
	I1216 06:32:25.251754 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251760 1633651 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 06:32:25.251767 1633651 command_runner.go:130] > # no_pivot = false
	I1216 06:32:25.251773 1633651 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 06:32:25.251779 1633651 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 06:32:25.251788 1633651 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 06:32:25.251794 1633651 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 06:32:25.251799 1633651 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 06:32:25.251815 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251827 1633651 command_runner.go:130] > # conmon = ""
	I1216 06:32:25.251832 1633651 command_runner.go:130] > # Cgroup setting for conmon
	I1216 06:32:25.251838 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 06:32:25.251853 1633651 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 06:32:25.251866 1633651 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 06:32:25.251872 1633651 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 06:32:25.251879 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251884 1633651 command_runner.go:130] > # conmon_env = [
	I1216 06:32:25.251887 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251893 1633651 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 06:32:25.251898 1633651 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 06:32:25.251906 1633651 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 06:32:25.251910 1633651 command_runner.go:130] > # default_env = [
	I1216 06:32:25.251931 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251956 1633651 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 06:32:25.251970 1633651 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1216 06:32:25.251982 1633651 command_runner.go:130] > # selinux = false
	I1216 06:32:25.251995 1633651 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 06:32:25.252003 1633651 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 06:32:25.252037 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252047 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.252055 1633651 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 06:32:25.252060 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252066 1633651 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 06:32:25.252073 1633651 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 06:32:25.252082 1633651 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 06:32:25.252088 1633651 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 06:32:25.252097 1633651 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 06:32:25.252125 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252136 1633651 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 06:32:25.252147 1633651 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 06:32:25.252161 1633651 command_runner.go:130] > # the cgroup blockio controller.
	I1216 06:32:25.252165 1633651 command_runner.go:130] > # blockio_config_file = ""
	I1216 06:32:25.252172 1633651 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 06:32:25.252176 1633651 command_runner.go:130] > # blockio parameters.
	I1216 06:32:25.252182 1633651 command_runner.go:130] > # blockio_reload = false
	I1216 06:32:25.252207 1633651 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 06:32:25.252224 1633651 command_runner.go:130] > # irqbalance daemon.
	I1216 06:32:25.252230 1633651 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 06:32:25.252251 1633651 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 06:32:25.252260 1633651 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 06:32:25.252270 1633651 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 06:32:25.252276 1633651 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 06:32:25.252283 1633651 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 06:32:25.252291 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252295 1633651 command_runner.go:130] > # rdt_config_file = ""
	I1216 06:32:25.252300 1633651 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 06:32:25.252305 1633651 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 06:32:25.252321 1633651 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 06:32:25.252339 1633651 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 06:32:25.252356 1633651 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 06:32:25.252372 1633651 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 06:32:25.252380 1633651 command_runner.go:130] > # will be added.
	I1216 06:32:25.252385 1633651 command_runner.go:130] > # default_capabilities = [
	I1216 06:32:25.252388 1633651 command_runner.go:130] > # 	"CHOWN",
	I1216 06:32:25.252392 1633651 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 06:32:25.252405 1633651 command_runner.go:130] > # 	"FSETID",
	I1216 06:32:25.252411 1633651 command_runner.go:130] > # 	"FOWNER",
	I1216 06:32:25.252415 1633651 command_runner.go:130] > # 	"SETGID",
	I1216 06:32:25.252431 1633651 command_runner.go:130] > # 	"SETUID",
	I1216 06:32:25.252493 1633651 command_runner.go:130] > # 	"SETPCAP",
	I1216 06:32:25.252505 1633651 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 06:32:25.252509 1633651 command_runner.go:130] > # 	"KILL",
	I1216 06:32:25.252512 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252520 1633651 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 06:32:25.252530 1633651 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 06:32:25.252534 1633651 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 06:32:25.252541 1633651 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 06:32:25.252547 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252564 1633651 command_runner.go:130] > default_sysctls = [
	I1216 06:32:25.252577 1633651 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 06:32:25.252581 1633651 command_runner.go:130] > ]
	I1216 06:32:25.252587 1633651 command_runner.go:130] > # List of devices on the host that a
	I1216 06:32:25.252597 1633651 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 06:32:25.252601 1633651 command_runner.go:130] > # allowed_devices = [
	I1216 06:32:25.252605 1633651 command_runner.go:130] > # 	"/dev/fuse",
	I1216 06:32:25.252610 1633651 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 06:32:25.252613 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252624 1633651 command_runner.go:130] > # List of additional devices. specified as
	I1216 06:32:25.252649 1633651 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 06:32:25.252661 1633651 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 06:32:25.252667 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252677 1633651 command_runner.go:130] > # additional_devices = [
	I1216 06:32:25.252685 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252691 1633651 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 06:32:25.252703 1633651 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 06:32:25.252716 1633651 command_runner.go:130] > # 	"/etc/cdi",
	I1216 06:32:25.252739 1633651 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 06:32:25.252743 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252750 1633651 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 06:32:25.252759 1633651 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 06:32:25.252769 1633651 command_runner.go:130] > # Defaults to false.
	I1216 06:32:25.252779 1633651 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 06:32:25.252786 1633651 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 06:32:25.252792 1633651 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 06:32:25.252807 1633651 command_runner.go:130] > # hooks_dir = [
	I1216 06:32:25.252819 1633651 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 06:32:25.252823 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252829 1633651 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 06:32:25.252851 1633651 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 06:32:25.252857 1633651 command_runner.go:130] > # its default mounts from the following two files:
	I1216 06:32:25.252863 1633651 command_runner.go:130] > #
	I1216 06:32:25.252870 1633651 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 06:32:25.252876 1633651 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 06:32:25.252882 1633651 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 06:32:25.252886 1633651 command_runner.go:130] > #
	I1216 06:32:25.252893 1633651 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 06:32:25.252917 1633651 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 06:32:25.252940 1633651 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 06:32:25.252947 1633651 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 06:32:25.252950 1633651 command_runner.go:130] > #
	I1216 06:32:25.252955 1633651 command_runner.go:130] > # default_mounts_file = ""
	I1216 06:32:25.252963 1633651 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 06:32:25.252970 1633651 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 06:32:25.252977 1633651 command_runner.go:130] > # pids_limit = -1
	I1216 06:32:25.252989 1633651 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1216 06:32:25.253005 1633651 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 06:32:25.253018 1633651 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 06:32:25.253043 1633651 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 06:32:25.253055 1633651 command_runner.go:130] > # log_size_max = -1
	I1216 06:32:25.253064 1633651 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 06:32:25.253068 1633651 command_runner.go:130] > # log_to_journald = false
	I1216 06:32:25.253080 1633651 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 06:32:25.253090 1633651 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 06:32:25.253096 1633651 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 06:32:25.253101 1633651 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 06:32:25.253123 1633651 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 06:32:25.253128 1633651 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 06:32:25.253151 1633651 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 06:32:25.253157 1633651 command_runner.go:130] > # read_only = false
	I1216 06:32:25.253169 1633651 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 06:32:25.253183 1633651 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 06:32:25.253188 1633651 command_runner.go:130] > # live configuration reload.
	I1216 06:32:25.253196 1633651 command_runner.go:130] > # log_level = "info"
	I1216 06:32:25.253219 1633651 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 06:32:25.253232 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.253236 1633651 command_runner.go:130] > # log_filter = ""
	I1216 06:32:25.253252 1633651 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253264 1633651 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 06:32:25.253273 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253281 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253287 1633651 command_runner.go:130] > # uid_mappings = ""
	I1216 06:32:25.253293 1633651 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253300 1633651 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 06:32:25.253311 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253328 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253340 1633651 command_runner.go:130] > # gid_mappings = ""
	I1216 06:32:25.253346 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 06:32:25.253362 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253369 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253377 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253385 1633651 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 06:32:25.253391 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 06:32:25.253408 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253421 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253438 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253448 1633651 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 06:32:25.253459 1633651 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 06:32:25.253468 1633651 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 06:32:25.253475 1633651 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 06:32:25.253481 1633651 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 06:32:25.253487 1633651 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 06:32:25.253493 1633651 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 06:32:25.253518 1633651 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 06:32:25.253530 1633651 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 06:32:25.253541 1633651 command_runner.go:130] > # drop_infra_ctr = true
	I1216 06:32:25.253557 1633651 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 06:32:25.253566 1633651 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 06:32:25.253573 1633651 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 06:32:25.253581 1633651 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 06:32:25.253607 1633651 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 06:32:25.253614 1633651 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 06:32:25.253630 1633651 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 06:32:25.253643 1633651 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 06:32:25.253647 1633651 command_runner.go:130] > # shared_cpuset = ""
	I1216 06:32:25.253653 1633651 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 06:32:25.253666 1633651 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 06:32:25.253670 1633651 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 06:32:25.253681 1633651 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 06:32:25.253688 1633651 command_runner.go:130] > # pinns_path = ""
	I1216 06:32:25.253694 1633651 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 06:32:25.253718 1633651 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 06:32:25.253731 1633651 command_runner.go:130] > # enable_criu_support = true
	I1216 06:32:25.253736 1633651 command_runner.go:130] > # Enable/disable the generation of the container,
	I1216 06:32:25.253754 1633651 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 06:32:25.253764 1633651 command_runner.go:130] > # enable_pod_events = false
	I1216 06:32:25.253771 1633651 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 06:32:25.253776 1633651 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 06:32:25.253786 1633651 command_runner.go:130] > # default_runtime = "crun"
	I1216 06:32:25.253795 1633651 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 06:32:25.253803 1633651 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1216 06:32:25.253814 1633651 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 06:32:25.253835 1633651 command_runner.go:130] > # creation as a file is not desired either.
	I1216 06:32:25.253853 1633651 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 06:32:25.253868 1633651 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 06:32:25.253876 1633651 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 06:32:25.253879 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.253885 1633651 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 06:32:25.253891 1633651 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 06:32:25.253923 1633651 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 06:32:25.253938 1633651 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 06:32:25.253941 1633651 command_runner.go:130] > #
	I1216 06:32:25.253946 1633651 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 06:32:25.253955 1633651 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 06:32:25.253959 1633651 command_runner.go:130] > # runtime_type = "oci"
	I1216 06:32:25.253977 1633651 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 06:32:25.253987 1633651 command_runner.go:130] > # inherit_default_runtime = false
	I1216 06:32:25.254007 1633651 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 06:32:25.254012 1633651 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 06:32:25.254016 1633651 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 06:32:25.254020 1633651 command_runner.go:130] > # monitor_env = []
	I1216 06:32:25.254034 1633651 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 06:32:25.254044 1633651 command_runner.go:130] > # allowed_annotations = []
	I1216 06:32:25.254060 1633651 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 06:32:25.254072 1633651 command_runner.go:130] > # no_sync_log = false
	I1216 06:32:25.254076 1633651 command_runner.go:130] > # default_annotations = {}
	I1216 06:32:25.254081 1633651 command_runner.go:130] > # stream_websockets = false
	I1216 06:32:25.254088 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.254142 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.254155 1633651 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 06:32:25.254162 1633651 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 06:32:25.254179 1633651 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 06:32:25.254193 1633651 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 06:32:25.254197 1633651 command_runner.go:130] > #   in $PATH.
	I1216 06:32:25.254203 1633651 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 06:32:25.254216 1633651 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 06:32:25.254223 1633651 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 06:32:25.254226 1633651 command_runner.go:130] > #   state.
	I1216 06:32:25.254232 1633651 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 06:32:25.254254 1633651 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1216 06:32:25.254272 1633651 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 06:32:25.254285 1633651 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 06:32:25.254290 1633651 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 06:32:25.254302 1633651 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 06:32:25.254311 1633651 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 06:32:25.254317 1633651 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 06:32:25.254340 1633651 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 06:32:25.254347 1633651 command_runner.go:130] > #   The currently recognized values are:
	I1216 06:32:25.254369 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 06:32:25.254378 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 06:32:25.254387 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 06:32:25.254393 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 06:32:25.254405 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 06:32:25.254419 1633651 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 06:32:25.254436 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 06:32:25.254450 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 06:32:25.254456 1633651 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 06:32:25.254476 1633651 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 06:32:25.254491 1633651 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 06:32:25.254498 1633651 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 06:32:25.254509 1633651 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 06:32:25.254520 1633651 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 06:32:25.254530 1633651 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 06:32:25.254561 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 06:32:25.254585 1633651 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 06:32:25.254596 1633651 command_runner.go:130] > #   deprecated option "conmon".
	I1216 06:32:25.254603 1633651 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 06:32:25.254613 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 06:32:25.254624 1633651 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 06:32:25.254629 1633651 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 06:32:25.254639 1633651 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 06:32:25.254660 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 06:32:25.254668 1633651 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 06:32:25.254672 1633651 command_runner.go:130] > #   conmon-rs by using:
	I1216 06:32:25.254689 1633651 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 06:32:25.254709 1633651 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 06:32:25.254724 1633651 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 06:32:25.254731 1633651 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 06:32:25.254739 1633651 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 06:32:25.254746 1633651 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 06:32:25.254767 1633651 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 06:32:25.254780 1633651 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 06:32:25.254799 1633651 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 06:32:25.254817 1633651 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 06:32:25.254822 1633651 command_runner.go:130] > #   when a machine crash happens.
	I1216 06:32:25.254829 1633651 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 06:32:25.254840 1633651 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 06:32:25.254848 1633651 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 06:32:25.254855 1633651 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 06:32:25.254861 1633651 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 06:32:25.254884 1633651 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 06:32:25.254894 1633651 command_runner.go:130] > #
	I1216 06:32:25.254899 1633651 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 06:32:25.254902 1633651 command_runner.go:130] > #
	I1216 06:32:25.254922 1633651 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 06:32:25.254936 1633651 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 06:32:25.254939 1633651 command_runner.go:130] > #
	I1216 06:32:25.254946 1633651 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 06:32:25.254954 1633651 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 06:32:25.254957 1633651 command_runner.go:130] > #
	I1216 06:32:25.254964 1633651 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 06:32:25.254970 1633651 command_runner.go:130] > # feature.
	I1216 06:32:25.254973 1633651 command_runner.go:130] > #
	I1216 06:32:25.254979 1633651 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 06:32:25.255001 1633651 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 06:32:25.255015 1633651 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 06:32:25.255021 1633651 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 06:32:25.255037 1633651 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 06:32:25.255046 1633651 command_runner.go:130] > #
	I1216 06:32:25.255053 1633651 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 06:32:25.255059 1633651 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 06:32:25.255065 1633651 command_runner.go:130] > #
	I1216 06:32:25.255071 1633651 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 06:32:25.255076 1633651 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 06:32:25.255079 1633651 command_runner.go:130] > #
	I1216 06:32:25.255089 1633651 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 06:32:25.255098 1633651 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 06:32:25.255116 1633651 command_runner.go:130] > # limitation.
	I1216 06:32:25.255127 1633651 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 06:32:25.255133 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 06:32:25.255143 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255151 1633651 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 06:32:25.255155 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255165 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255174 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255210 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255222 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255226 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255231 1633651 command_runner.go:130] > allowed_annotations = [
	I1216 06:32:25.255235 1633651 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 06:32:25.255238 1633651 command_runner.go:130] > ]
	I1216 06:32:25.255247 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255251 1633651 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 06:32:25.255267 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 06:32:25.255271 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255274 1633651 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 06:32:25.255290 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255300 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255305 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255324 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255354 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255360 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255364 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255371 1633651 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 06:32:25.255376 1633651 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 06:32:25.255383 1633651 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 06:32:25.255413 1633651 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 06:32:25.255438 1633651 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 06:32:25.255450 1633651 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 06:32:25.255462 1633651 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 06:32:25.255468 1633651 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 06:32:25.255478 1633651 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 06:32:25.255505 1633651 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 06:32:25.255522 1633651 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 06:32:25.255540 1633651 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 06:32:25.255551 1633651 command_runner.go:130] > # Example:
	I1216 06:32:25.255560 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 06:32:25.255569 1633651 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 06:32:25.255576 1633651 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 06:32:25.255584 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 06:32:25.255587 1633651 command_runner.go:130] > # cpuset = "0-1"
	I1216 06:32:25.255591 1633651 command_runner.go:130] > # cpushares = "5"
	I1216 06:32:25.255595 1633651 command_runner.go:130] > # cpuquota = "1000"
	I1216 06:32:25.255625 1633651 command_runner.go:130] > # cpuperiod = "100000"
	I1216 06:32:25.255636 1633651 command_runner.go:130] > # cpulimit = "35"
	I1216 06:32:25.255640 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.255645 1633651 command_runner.go:130] > # The workload name is workload-type.
	I1216 06:32:25.255652 1633651 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 06:32:25.255661 1633651 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 06:32:25.255667 1633651 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 06:32:25.255678 1633651 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 06:32:25.255686 1633651 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
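Following the example comment above, a pod would carry two annotations: the activation key (value ignored) and the per-container override. A small sketch, with a hypothetical container name "app" and cpushares value "200":

package main

import "fmt"

func main() {
	// Hypothetical pod annotations for the "workload-type" example above:
	// "io.crio/workload" opts the whole pod in (only the key is matched),
	// and the prefixed annotation overrides cpushares for container "app".
	annotations := map[string]string{
		"io.crio/workload":          "",
		"io.crio.workload-type/app": `{"cpushares": "200"}`,
	}
	for key, value := range annotations {
		fmt.Printf("%s: %q\n", key, value)
	}
}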
	I1216 06:32:25.255715 1633651 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 06:32:25.255733 1633651 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 06:32:25.255738 1633651 command_runner.go:130] > # Default value is set to true
	I1216 06:32:25.255749 1633651 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 06:32:25.255755 1633651 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 06:32:25.255760 1633651 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 06:32:25.255767 1633651 command_runner.go:130] > # Default value is set to 'false'
	I1216 06:32:25.255771 1633651 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 06:32:25.255776 1633651 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 06:32:25.255807 1633651 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 06:32:25.255817 1633651 command_runner.go:130] > # timezone = ""
	I1216 06:32:25.255824 1633651 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 06:32:25.255830 1633651 command_runner.go:130] > #
	I1216 06:32:25.255836 1633651 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 06:32:25.255846 1633651 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 06:32:25.255850 1633651 command_runner.go:130] > [crio.image]
	I1216 06:32:25.255856 1633651 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 06:32:25.255866 1633651 command_runner.go:130] > # default_transport = "docker://"
	I1216 06:32:25.255888 1633651 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 06:32:25.255905 1633651 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255915 1633651 command_runner.go:130] > # global_auth_file = ""
	I1216 06:32:25.255920 1633651 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 06:32:25.255925 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255931 1633651 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.255940 1633651 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 06:32:25.255955 1633651 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255961 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255968 1633651 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 06:32:25.255989 1633651 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 06:32:25.255997 1633651 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1216 06:32:25.256008 1633651 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1216 06:32:25.256014 1633651 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 06:32:25.256020 1633651 command_runner.go:130] > # pause_command = "/pause"
	I1216 06:32:25.256026 1633651 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 06:32:25.256032 1633651 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 06:32:25.256042 1633651 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 06:32:25.256057 1633651 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 06:32:25.256069 1633651 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 06:32:25.256085 1633651 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 06:32:25.256096 1633651 command_runner.go:130] > # pinned_images = [
	I1216 06:32:25.256100 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256106 1633651 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 06:32:25.256116 1633651 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 06:32:25.256122 1633651 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 06:32:25.256131 1633651 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 06:32:25.256139 1633651 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 06:32:25.256144 1633651 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 06:32:25.256150 1633651 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 06:32:25.256179 1633651 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 06:32:25.256192 1633651 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 06:32:25.256207 1633651 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1216 06:32:25.256217 1633651 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 06:32:25.256222 1633651 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 06:32:25.256229 1633651 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 06:32:25.256238 1633651 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 06:32:25.256242 1633651 command_runner.go:130] > # changing them here.
	I1216 06:32:25.256266 1633651 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 06:32:25.256283 1633651 command_runner.go:130] > # insecure_registries = [
	I1216 06:32:25.256293 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256303 1633651 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 06:32:25.256311 1633651 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1216 06:32:25.256321 1633651 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 06:32:25.256331 1633651 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 06:32:25.256347 1633651 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 06:32:25.256360 1633651 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 06:32:25.256372 1633651 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 06:32:25.256380 1633651 command_runner.go:130] > # auto_reload_registries = false
	I1216 06:32:25.256386 1633651 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 06:32:25.256395 1633651 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1216 06:32:25.256404 1633651 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 06:32:25.256408 1633651 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 06:32:25.256422 1633651 command_runner.go:130] > # The mode of short name resolution.
	I1216 06:32:25.256436 1633651 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 06:32:25.256452 1633651 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1216 06:32:25.256479 1633651 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 06:32:25.256484 1633651 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 06:32:25.256490 1633651 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1216 06:32:25.256497 1633651 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 06:32:25.256512 1633651 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 06:32:25.256532 1633651 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 06:32:25.256544 1633651 command_runner.go:130] > # CNI plugins.
	I1216 06:32:25.256548 1633651 command_runner.go:130] > [crio.network]
	I1216 06:32:25.256566 1633651 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 06:32:25.256583 1633651 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1216 06:32:25.256590 1633651 command_runner.go:130] > # cni_default_network = ""
	I1216 06:32:25.256596 1633651 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 06:32:25.256603 1633651 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 06:32:25.256610 1633651 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 06:32:25.256626 1633651 command_runner.go:130] > # plugin_dirs = [
	I1216 06:32:25.256650 1633651 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 06:32:25.256654 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256678 1633651 command_runner.go:130] > # List of included pod metrics.
	I1216 06:32:25.256691 1633651 command_runner.go:130] > # included_pod_metrics = [
	I1216 06:32:25.256695 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256701 1633651 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 06:32:25.256708 1633651 command_runner.go:130] > [crio.metrics]
	I1216 06:32:25.256712 1633651 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 06:32:25.256717 1633651 command_runner.go:130] > # enable_metrics = false
	I1216 06:32:25.256723 1633651 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 06:32:25.256728 1633651 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 06:32:25.256737 1633651 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1216 06:32:25.256762 1633651 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 06:32:25.256774 1633651 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 06:32:25.256778 1633651 command_runner.go:130] > # metrics_collectors = [
	I1216 06:32:25.256799 1633651 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 06:32:25.256808 1633651 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 06:32:25.256813 1633651 command_runner.go:130] > # 	"containers_oom_total",
	I1216 06:32:25.256818 1633651 command_runner.go:130] > # 	"processes_defunct",
	I1216 06:32:25.256829 1633651 command_runner.go:130] > # 	"operations_total",
	I1216 06:32:25.256834 1633651 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 06:32:25.256839 1633651 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 06:32:25.256842 1633651 command_runner.go:130] > # 	"operations_errors_total",
	I1216 06:32:25.256847 1633651 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 06:32:25.256851 1633651 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 06:32:25.256855 1633651 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 06:32:25.256869 1633651 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 06:32:25.256888 1633651 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 06:32:25.256897 1633651 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 06:32:25.256901 1633651 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 06:32:25.256906 1633651 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 06:32:25.256913 1633651 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 06:32:25.256916 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256923 1633651 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 06:32:25.256930 1633651 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 06:32:25.256944 1633651 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 06:32:25.256952 1633651 command_runner.go:130] > # metrics_port = 9090
	I1216 06:32:25.256958 1633651 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 06:32:25.256967 1633651 command_runner.go:130] > # metrics_socket = ""
	I1216 06:32:25.256972 1633651 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 06:32:25.256979 1633651 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 06:32:25.256987 1633651 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 06:32:25.257000 1633651 command_runner.go:130] > # certificate on any modification event.
	I1216 06:32:25.257004 1633651 command_runner.go:130] > # metrics_cert = ""
	I1216 06:32:25.257023 1633651 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 06:32:25.257034 1633651 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 06:32:25.257039 1633651 command_runner.go:130] > # metrics_key = ""
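If enable_metrics were flipped to true with the defaults shown above (127.0.0.1:9090), the listed collectors are exposed in Prometheus text format; a sketch of fetching them (the /metrics path is the usual Prometheus convention and an assumption here):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Fetch CRI-O's Prometheus metrics from the host/port configured above.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s, %d bytes of metrics\n", resp.Status, len(body))
}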
	I1216 06:32:25.257061 1633651 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 06:32:25.257070 1633651 command_runner.go:130] > [crio.tracing]
	I1216 06:32:25.257076 1633651 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 06:32:25.257080 1633651 command_runner.go:130] > # enable_tracing = false
	I1216 06:32:25.257088 1633651 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 06:32:25.257099 1633651 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 06:32:25.257111 1633651 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 06:32:25.257127 1633651 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 06:32:25.257138 1633651 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 06:32:25.257142 1633651 command_runner.go:130] > [crio.nri]
	I1216 06:32:25.257156 1633651 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 06:32:25.257167 1633651 command_runner.go:130] > # enable_nri = true
	I1216 06:32:25.257172 1633651 command_runner.go:130] > # NRI socket to listen on.
	I1216 06:32:25.257181 1633651 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 06:32:25.257193 1633651 command_runner.go:130] > # NRI plugin directory to use.
	I1216 06:32:25.257198 1633651 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 06:32:25.257205 1633651 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 06:32:25.257210 1633651 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 06:32:25.257218 1633651 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 06:32:25.257323 1633651 command_runner.go:130] > # nri_disable_connections = false
	I1216 06:32:25.257337 1633651 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 06:32:25.257342 1633651 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 06:32:25.257358 1633651 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 06:32:25.257370 1633651 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 06:32:25.257375 1633651 command_runner.go:130] > # NRI default validator configuration.
	I1216 06:32:25.257383 1633651 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 06:32:25.257393 1633651 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 06:32:25.257397 1633651 command_runner.go:130] > # can be restricted/rejected:
	I1216 06:32:25.257403 1633651 command_runner.go:130] > # - OCI hook injection
	I1216 06:32:25.257409 1633651 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 06:32:25.257417 1633651 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 06:32:25.257431 1633651 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 06:32:25.257443 1633651 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 06:32:25.257465 1633651 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 06:32:25.257479 1633651 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 06:32:25.257485 1633651 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 06:32:25.257493 1633651 command_runner.go:130] > #
	I1216 06:32:25.257498 1633651 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 06:32:25.257503 1633651 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 06:32:25.257510 1633651 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 06:32:25.257516 1633651 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 06:32:25.257522 1633651 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 06:32:25.257549 1633651 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 06:32:25.257562 1633651 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 06:32:25.257568 1633651 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 06:32:25.257574 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.257593 1633651 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1216 06:32:25.257604 1633651 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 06:32:25.257609 1633651 command_runner.go:130] > [crio.stats]
	I1216 06:32:25.257639 1633651 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 06:32:25.257651 1633651 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 06:32:25.257655 1633651 command_runner.go:130] > # stats_collection_period = 0
	I1216 06:32:25.257662 1633651 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 06:32:25.257671 1633651 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 06:32:25.257675 1633651 command_runner.go:130] > # collection_period = 0
	I1216 06:32:25.259482 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219727326Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 06:32:25.259512 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219767515Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 06:32:25.259524 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219798038Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 06:32:25.259536 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219823548Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 06:32:25.259545 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219901653Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:25.259556 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.220263616Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 06:32:25.259571 1633651 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1216 06:32:25.260036 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:25.260064 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:25.260092 1633651 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:32:25.260122 1633651 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:32:25.260297 1633651 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
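As a quick sanity check of the kubelet stanza in the generated config above, the document can be decoded with sigs.k8s.io/yaml; this sketch declares only the few fields it inspects (all other keys are ignored on unmarshal):

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// Minimal local view of the KubeletConfiguration document above; only the
// fields checked here are declared.
type kubeletConfig struct {
	CgroupDriver             string `json:"cgroupDriver"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `json:"failSwapOn"`
}

const doc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		panic(err)
	}
	fmt.Printf("cgroupDriver=%s runtimeEndpoint=%s failSwapOn=%v\n",
		kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.FailSwapOn)
}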
	
	I1216 06:32:25.260383 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:32:25.268343 1633651 command_runner.go:130] > kubeadm
	I1216 06:32:25.268362 1633651 command_runner.go:130] > kubectl
	I1216 06:32:25.268366 1633651 command_runner.go:130] > kubelet
	I1216 06:32:25.268406 1633651 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:32:25.268462 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:32:25.276071 1633651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:32:25.288575 1633651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:32:25.300994 1633651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 06:32:25.313670 1633651 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:32:25.317448 1633651 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 06:32:25.317550 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:25.453328 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:26.148228 1633651 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:32:26.148252 1633651 certs.go:195] generating shared ca certs ...
	I1216 06:32:26.148269 1633651 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.148410 1633651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:32:26.148482 1633651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:32:26.148493 1633651 certs.go:257] generating profile certs ...
	I1216 06:32:26.148601 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:32:26.148663 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:32:26.148727 1633651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:32:26.148740 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 06:32:26.148753 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 06:32:26.148765 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 06:32:26.148785 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 06:32:26.148802 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 06:32:26.148814 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 06:32:26.148830 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 06:32:26.148841 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 06:32:26.148892 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:32:26.148927 1633651 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:32:26.148935 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:32:26.148966 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:32:26.148996 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:32:26.149023 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:32:26.149078 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:26.149109 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.149127 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.149143 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.149727 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:32:26.167732 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:32:26.185872 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:32:26.203036 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:32:26.220347 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:32:26.238248 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:32:26.255572 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:32:26.272719 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:32:26.290975 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:32:26.308752 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:32:26.326261 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:32:26.344085 1633651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:32:26.357043 1633651 ssh_runner.go:195] Run: openssl version
	I1216 06:32:26.362895 1633651 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 06:32:26.363366 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.370980 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:32:26.378519 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382213 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382261 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382313 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.422786 1633651 command_runner.go:130] > 3ec20f2e
	I1216 06:32:26.423247 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:32:26.430703 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.437977 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:32:26.445376 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449306 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449352 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449400 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.489732 1633651 command_runner.go:130] > b5213941
	I1216 06:32:26.490221 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:32:26.498231 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.505778 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:32:26.513624 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517603 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517655 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517708 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.558501 1633651 command_runner.go:130] > 51391683
	I1216 06:32:26.558962 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
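The ln/openssl/test sequence above installs each CA under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0). A sketch of the same idea, shelling out to the identical openssl invocation (certificate path taken from the log; creating the symlink requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Compute the OpenSSL subject hash of a CA certificate and expose it
	// under /etc/ssl/certs/<hash>.0 so TLS libraries that scan that
	// directory can find it.
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then symlink to the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}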
	I1216 06:32:26.566709 1633651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570687 1633651 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570714 1633651 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 06:32:26.570721 1633651 command_runner.go:130] > Device: 259,1	Inode: 1064557     Links: 1
	I1216 06:32:26.570728 1633651 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:26.570734 1633651 command_runner.go:130] > Access: 2025-12-16 06:28:17.989070314 +0000
	I1216 06:32:26.570739 1633651 command_runner.go:130] > Modify: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570745 1633651 command_runner.go:130] > Change: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570750 1633651 command_runner.go:130] >  Birth: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570807 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:32:26.611178 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.611643 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:32:26.653044 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.653496 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:32:26.693948 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.694452 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:32:26.737177 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.737685 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:32:26.777863 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.778315 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 06:32:26.821770 1633651 command_runner.go:130] > Certificate will not expire
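The repeated `openssl x509 -noout -checkend 86400` calls above have a direct crypto/x509 equivalent; a sketch (the certificate path is one of those checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file will
// expire within the given window, the native equivalent of
// `openssl x509 -noout -checkend 86400`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if expiring {
		fmt.Println("Certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire")
	}
}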
	I1216 06:32:26.822198 1633651 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:26.822282 1633651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:32:26.822342 1633651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:32:26.848560 1633651 cri.go:89] found id: ""
	I1216 06:32:26.848631 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:32:26.856311 1633651 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 06:32:26.856334 1633651 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 06:32:26.856341 1633651 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 06:32:26.856353 1633651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
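The restart-vs-init decision above only requires that these three paths already exist on the node; a trivial sketch of the same existence check (paths from the log):

package main

import (
	"fmt"
	"os"
)

func main() {
	// If all of these are present, minikube attempts a cluster restart
	// instead of a fresh kubeadm init.
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	existing := 0
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			existing++
		}
	}
	if existing == len(paths) {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("missing configuration, a fresh kubeadm init would be needed")
	}
}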
	I1216 06:32:26.856377 1633651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:32:26.856451 1633651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:32:26.863716 1633651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:32:26.864139 1633651 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-364120" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.864257 1633651 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "functional-364120" cluster setting kubeconfig missing "functional-364120" context setting]
	I1216 06:32:26.864570 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
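A sketch of the verification that triggered the repair above, using client-go's clientcmd to load the kubeconfig and look for the profile's cluster and context entries (path and profile name from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const kubeconfig = "/home/jenkins/minikube-integration/22141-1596013/kubeconfig"
	const name = "functional-364120"

	// Load the kubeconfig and check whether both the cluster and the
	// context entries for the profile exist; if either is missing the
	// file "needs updating (will repair)".
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		panic(err)
	}
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		fmt.Printf("kubeconfig needs updating: cluster=%v context=%v\n", hasCluster, hasContext)
		return
	}
	fmt.Println("kubeconfig already references", name)
}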
	I1216 06:32:26.865235 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.865467 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.866570 1633651 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 06:32:26.866631 1633651 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 06:32:26.866668 1633651 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 06:32:26.866693 1633651 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 06:32:26.866720 1633651 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 06:32:26.867179 1633651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:32:26.868151 1633651 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 06:32:26.877051 1633651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 06:32:26.877090 1633651 kubeadm.go:602] duration metric: took 20.700092ms to restartPrimaryControlPlane
	I1216 06:32:26.877101 1633651 kubeadm.go:403] duration metric: took 54.908954ms to StartCluster
	I1216 06:32:26.877118 1633651 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.877187 1633651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.877859 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.878064 1633651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:32:26.878625 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:26.878682 1633651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:32:26.878749 1633651 addons.go:70] Setting storage-provisioner=true in profile "functional-364120"
	I1216 06:32:26.878762 1633651 addons.go:239] Setting addon storage-provisioner=true in "functional-364120"
	I1216 06:32:26.878787 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.879288 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.879473 1633651 addons.go:70] Setting default-storageclass=true in profile "functional-364120"
	I1216 06:32:26.879497 1633651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-364120"
	I1216 06:32:26.879803 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.884633 1633651 out.go:179] * Verifying Kubernetes components...
	I1216 06:32:26.887314 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:26.918200 1633651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:32:26.919874 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.920155 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.920453 1633651 addons.go:239] Setting addon default-storageclass=true in "functional-364120"
	I1216 06:32:26.920538 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.920986 1633651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:26.921004 1633651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:32:26.921061 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.921340 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.964659 1633651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:26.964697 1633651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:32:26.964756 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.965286 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:26.998084 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:27.098293 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:27.125997 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:27.132422 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:27.897996 1633651 node_ready.go:35] waiting up to 6m0s for node "functional-364120" to be "Ready" ...
	I1216 06:32:27.898129 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:27.898194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
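The wait loop behind these GETs can be reproduced with client-go: poll the node until its Ready condition is True, or give up after the 6m budget. A sketch (kubeconfig path and node name from the log; the 5s poll interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22141-1596013/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the node object until the Ready condition is True.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), "functional-364120", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}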
	I1216 06:32:27.898417 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898455 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898484 1633651 retry.go:31] will retry after 293.203887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898523 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898548 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898555 1633651 retry.go:31] will retry after 361.667439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.192028 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.251245 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.251292 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.251318 1633651 retry.go:31] will retry after 421.770055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.261399 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.326104 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.326166 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.326190 1633651 retry.go:31] will retry after 230.03946ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.398272 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.398664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.557150 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.610627 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.614370 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.614405 1633651 retry.go:31] will retry after 431.515922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.673577 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.751124 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.751167 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.751187 1633651 retry.go:31] will retry after 416.921651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.898406 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.046157 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:29.107254 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.107314 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.107371 1633651 retry.go:31] will retry after 899.303578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.168518 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:29.225793 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.229337 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.229371 1633651 retry.go:31] will retry after 758.152445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.398643 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.398767 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.399082 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.898862 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.898939 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.899317 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:29.899390 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:29.988648 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.011610 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.113177 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.113245 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.113269 1633651 retry.go:31] will retry after 739.984539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134431 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.134488 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134525 1633651 retry.go:31] will retry after 743.078754ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.398873 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.398944 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.399345 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.854128 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.878717 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.899202 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.899283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.899567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.948589 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.948629 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.948651 1633651 retry.go:31] will retry after 2.54132752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989038 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.989082 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989107 1633651 retry.go:31] will retry after 1.925489798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:31.398656 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.398729 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.399083 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:31.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.898714 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.899058 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.398954 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.399038 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:32.399469 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:32.898198 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.898298 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.914948 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:32.974729 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:32.974766 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:32.974784 1633651 retry.go:31] will retry after 2.13279976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.398213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.398308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:33.491042 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:33.546485 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:33.550699 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.550734 1633651 retry.go:31] will retry after 1.927615537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.899219 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.899329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.899638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:34.898732 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:35.108136 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:35.168080 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.168179 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.168237 1633651 retry.go:31] will retry after 2.609957821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.398216 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.398310 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.398589 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:35.478854 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:35.539410 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.539453 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.539472 1633651 retry.go:31] will retry after 2.66810674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.898940 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.899019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.899395 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.399231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.399312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.399638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.898542 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:36.898864 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:37.398807 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.399243 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:37.778747 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:37.833515 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:37.837237 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.837278 1633651 retry.go:31] will retry after 4.537651284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.898560 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.898639 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.898976 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.208455 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:38.268308 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:38.268354 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.268373 1633651 retry.go:31] will retry after 8.612374195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.398733 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.398807 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.399077 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.899000 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.899085 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.899556 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:38.899628 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:39.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.398769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:39.898353 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.898737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.898499 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.898578 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:41.398243 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:41.398654 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:41.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.898352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.375464 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:42.399185 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.399260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.399531 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.439480 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:42.439520 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.439538 1633651 retry.go:31] will retry after 13.723834965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.899110 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.899183 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.899457 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.398171 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.398246 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.398594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.898384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:43.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:44.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:44.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.398383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.398309 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.398384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:46.398787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:46.881536 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:46.898964 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.899056 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.899361 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.940375 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:46.943961 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:46.943995 1633651 retry.go:31] will retry after 5.072276608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:47.398701 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.398787 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.399064 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:47.898839 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.898914 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.899236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:48.398915 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.398993 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.399340 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:48.399397 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:48.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.899069 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.399214 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.399301 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.399707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.398466 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.398735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:50.898770 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:51.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:51.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.898592 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.017198 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:52.080330 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:52.080367 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.080387 1633651 retry.go:31] will retry after 19.488213597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
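The apply failure above ends with a scheduled retry ("will retry after 19.488213597s"), and the same pattern repeats for each addon manifest for as long as the apiserver is unreachable. As a minimal sketch of that shell-out-and-retry-with-backoff pattern (the attempt cap, delay values, and the helper name retryApply are illustrative assumptions, not minikube's actual addons.go/retry.go code):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/exec"
	"time"
)

// retryApply re-runs `kubectl apply --force -f <manifest>` until it succeeds or
// the attempt budget is exhausted, sleeping a jittered, roughly doubling delay
// between attempts (mirroring the "will retry after ...s" lines in the log).
func retryApply(kubectl, kubeconfig, manifest string, attempts int) error {
	delay := 10 * time.Second
	var lastErr error
	for i := 1; i <= attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
		fmt.Println(lastErr)
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return lastErr
}

func main() {
	if err := retryApply(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		5,
	); err != nil {
		fmt.Println("giving up:", err)
	}
}

Until the apiserver comes back, every attempt fails the same way, which is why the identical stderr block recurs below with a new backoff duration each time.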
	I1216 06:32:52.398170 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.398254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.398603 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.898357 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.898430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.898751 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:52.898809 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:53.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.398509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.398780 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:53.898306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.898746 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.398531 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.898536 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.898616 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.898937 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:54.899000 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:55.398275 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:55.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.164267 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:56.225232 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:56.225280 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.225300 1633651 retry.go:31] will retry after 14.108855756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.398529 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.398594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.898932 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.899282 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:56.899334 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:57.399213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.399288 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.399591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:57.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.898457 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.898545 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.898936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:59.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:59.398702 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
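The GET https://192.168.49.2:8441/api/v1/nodes/functional-364120 requests repeating roughly every 500ms are minikube polling the node's Ready condition while it waits for the restarted apiserver, with each failed dial surfacing as a "will retry" warning like the one above. Below is a small client-go sketch of that kind of readiness poll; the kubeconfig path and node name are taken from the log, but the loop itself is an illustration under those assumptions, not the node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has the Ready condition set to True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, as the log timestamps suggest, under an overall deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for ctx.Err() == nil {
		ready, err := nodeReady(ctx, cs, "functional-364120")
		if err != nil {
			fmt.Println("will retry:", err) // e.g. connection refused while the apiserver restarts
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}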
	I1216 06:32:59.898313 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.398851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.898739 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:01.398863 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.398936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:01.399305 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:01.898923 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.899005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.899364 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.399175 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.399247 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.399610 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.898189 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.898266 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.398333 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.398410 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.398779 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.898460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.898527 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.898800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:03.898847 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:04.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.398745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:04.898458 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.898534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.898848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.398531 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.398614 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.898633 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.898709 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.899055 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:05.899137 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:06.398909 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.398987 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.399357 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:06.898176 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.898262 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.898675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.898344 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.898420 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.898721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:08.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:08.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:08.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.398398 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.398479 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.398785 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.898336 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.898666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:10.335122 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:10.396460 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:10.396519 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.396538 1633651 retry.go:31] will retry after 12.344116424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.398561 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.398627 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.398890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:10.398937 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:10.898605 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.898693 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.899053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.398885 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.569711 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:11.631078 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:11.634606 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.634637 1633651 retry.go:31] will retry after 14.712851021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.899031 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.899113 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.899432 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.898240 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.898566 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:12.898607 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:13.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:13.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.398404 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.398483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.898318 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:14.898742 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:15.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.398323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:15.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.398644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.898716 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.899100 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:16.899164 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:17.398918 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.399005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:17.899071 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.899230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.899613 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.398204 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.398291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.898350 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.898684 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:19.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:19.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:19.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.898318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.898648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.398306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.898284 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.898678 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.398294 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.898665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:21.898722 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:22.398602 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.398676 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.399053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:22.741700 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:22.805176 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:22.805212 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.805230 1633651 retry.go:31] will retry after 37.521073757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.898570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.398233 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.398648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:23.898753 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:24.398440 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:24.898547 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.898618 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.898926 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.898639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:26.348396 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:26.398844 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.398921 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.399279 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:26.399329 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:26.417393 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:26.417436 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.417455 1633651 retry.go:31] will retry after 31.35447413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.898149 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.898223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.898585 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.398341 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.398414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.898330 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.898845 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.898417 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.898496 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.898819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:28.898872 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:29.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.398632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:29.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.398830 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.898474 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:31.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.398330 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:31.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:31.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.898324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.898636 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.398372 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.398442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.898400 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.898485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.898850 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:33.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:33.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:33.898438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.398516 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.398867 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.898456 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.898537 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.898909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:35.398591 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.398658 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.398916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:35.398977 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:35.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.898728 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.898803 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:37.399202 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.399278 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.399639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:37.399694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:37.898374 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.398505 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.398571 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.898677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.398839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.898300 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:39.898667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:40.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:40.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.398462 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.398529 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.398809 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:41.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:42.398755 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.398839 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.399236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:42.898983 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.899053 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.899331 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.398183 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.398258 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.898308 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:44.398622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:44.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.398342 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.398448 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:46.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:46.398739 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:46.898513 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.898594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.898959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.398772 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.398859 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.399168 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.898938 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.899012 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.899377 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:48.399044 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.399126 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.399458 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:48.399514 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:48.898185 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.898520 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.398630 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.898360 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.898434 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.898761 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:50.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:51.398426 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.398503 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.398913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:51.898663 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.898743 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.899196 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.398565 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.398648 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.399111 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.898692 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.898773 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.899132 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:52.899190 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:53.398951 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.399065 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.399370 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:53.898173 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.898248 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.898623 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:55.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:55.398707 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:55.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.898628 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.898703 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.899073 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:57.398945 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.399019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.399371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:57.399427 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
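The polling loop above is minikube waiting for the node to report Ready: roughly every 500ms it issues GET /api/v1/nodes/functional-364120 against the apiserver at 192.168.49.2:8441, and while the connection is refused, node_ready.go logs a "will retry" warning about every two seconds. A minimal sketch of the same readiness check using client-go is shown below; the kubeconfig path, node name, interval, and timeout are assumptions for illustration, not minikube's actual implementation.

	// nodeready_sketch.go - illustrative sketch only; not minikube's code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed paths/names for illustration.
		kubeconfig := "/var/lib/minikube/kubeconfig"
		nodeName := "functional-364120"

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, as the log timestamps suggest, until Ready or timeout.
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
			if err != nil {
				// Corresponds to the "will retry" warnings while the apiserver refuses connections.
				fmt.Printf("error getting node %q (will retry): %v\n", nodeName, err)
			} else {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}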
	I1216 06:33:57.772952 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:57.834039 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837641 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837741 1633651 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
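The "Enabling 'storage-provisioner' returned an error" block above (and the identical 'default-storageclass' failure later) follows one pattern: kubectl apply tries to download the OpenAPI schema from localhost:8441 to validate the manifest, the apiserver refuses the connection, and addons.go records "apply failed, will retry". The error text suggests --validate=false, but with no reachable apiserver the apply still cannot be persisted. Below is a rough sketch of an apply-with-retry wrapper in the spirit of that behaviour; the helper name, retry count, and delay are assumptions, not minikube's actual addon machinery.

	// applyretry_sketch.go - illustrative sketch only; not minikube's addon code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl apply and retries on failure,
	// mirroring the "apply failed, will retry" lines seen in the log.
	func applyWithRetry(kubeconfig, manifest string, attempts int, delay time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
			fmt.Println(lastErr, "- will retry")
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		// Assumed paths for illustration only.
		err := applyWithRetry("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storage-provisioner.yaml", 3, 2*time.Second)
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}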
	I1216 06:33:57.899083 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.899158 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.899422 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.398161 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.898386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:59.898740 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:00.327789 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:34:00.398990 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.399071 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.399382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:00.427909 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.431971 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.432103 1633651 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:34:00.437092 1633651 out.go:179] * Enabled addons: 
	I1216 06:34:00.440884 1633651 addons.go:530] duration metric: took 1m33.562192947s for enable addons: enabled=[]
	I1216 06:34:00.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.898392 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.898244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.898577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:02.398409 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.398488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.398818 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:02.398876 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:02.898375 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.898792 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.398319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.398577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.398335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.398654 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.898325 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.898400 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:04.898801 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:05.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.398957 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:05.898686 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.899122 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.398925 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.399010 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.399354 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.899043 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:06.899475 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:07.399211 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.399289 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.399665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:07.898337 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.898748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:09.399015 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.399090 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.399360 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:09.399412 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:09.899197 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.899275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.899628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.398251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.898348 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:11.898743 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:12.398541 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.398609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:12.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.898723 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.899079 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.398865 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.398945 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.399273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.899072 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.899151 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.899501 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:13.899561 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:14.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:14.898363 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.398508 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.398583 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:16.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:16.398775 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:16.898203 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.898272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.898528 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.398515 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.398598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.398936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:18.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.398771 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:18.398820 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:18.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.898653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.398357 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.398446 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.398791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.898510 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.898589 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.898872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.398763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:20.898758 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:21.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.398590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:21.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.898851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.398811 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.398886 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.399204 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.899306 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:22.899351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:23.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.399181 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.399518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:23.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.898332 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.398714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:25.398435 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.398518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.398899 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:25.398964 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:25.898643 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.898718 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.898991 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.398331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.398659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.898526 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.899075 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:27.898798 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:28.398464 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.398539 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.398917 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:28.898624 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.898699 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.899014 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.398878 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.399221 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.898995 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.899075 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.899431 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:29.899497 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:30.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.398295 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.398549 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:30.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.898674 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.398835 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.898315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:32.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.398696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:32.398762 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:32.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.898844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.398245 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:34.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:34.398791 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:34.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.898671 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.898348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.898663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.398430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.898879 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.898962 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.899298 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:36.899363 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:37.398940 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.399018 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.399339 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:37.899128 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.899202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.899475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.398196 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.398276 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.398617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.898346 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.898424 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.898788 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:39.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.398304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.398637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:39.398705 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:39.898341 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.898419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.898791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.398499 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.398574 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.398963 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.898719 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.899009 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:41.398866 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.398958 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.399281 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:41.399336 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:41.899108 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.899190 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.899541 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.398226 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.398314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.398588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.898199 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.398433 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.398510 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.898570 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.898642 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.898913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:43.898966 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:44.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:44.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.898693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.398553 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.398755 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.399042 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.898964 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.899318 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:45.899373 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:46.399167 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.399253 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.399612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:46.898505 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.898584 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.898871 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.399034 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.399118 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.399524 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.898367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.898724 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:48.398399 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.398476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.398811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:48.398865 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:48.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.898763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.398479 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.398921 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.898632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.898398 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.898476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:50.898869 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:51.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:51.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:53.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.398437 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:53.398846 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:53.898285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.398640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.898640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:55.898694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:56.398347 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.398429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.398783 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:56.898669 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.898747 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.399054 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.399128 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.399397 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.898166 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.898252 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.898582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:58.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:58.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:58.898237 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.898734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:00.414820 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.414906 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.415201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:00.415247 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:00.899080 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.899160 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.899488 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.398203 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.398286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.898381 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.898741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.398760 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.398842 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.898874 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.898953 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.899310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:02.899364 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:03.399127 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.399199 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.399477 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:03.898183 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.898263 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.898574 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.898409 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.898488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.898770 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:05.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:05.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.398344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.398628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.898700 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.898789 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.899156 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:07.399150 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.399230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.399559 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:07.399618 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:07.898272 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.898270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.398741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.898260 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.898336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.898699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:09.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:10.398423 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.398892 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:10.898626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.898722 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.398911 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.399006 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.898151 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.898224 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:12.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:12.398695 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:12.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.398383 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.398463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.398838 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.898531 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.898894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:14.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:14.398765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:14.898300 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.398508 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.898664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.898532 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.898606 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:16.898924 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:17.398589 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.398661 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.398959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:17.898673 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.898753 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.899078 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.398855 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.398925 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.898973 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.899383 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:18.899438 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:19.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.399174 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.399532 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:19.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.898323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.898607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.898374 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:21.398418 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.398764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:21.398806 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:21.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.898294 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.898644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.398337 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.398411 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.898573 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.898929 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:23.898986 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:24.398626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.398696 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.398974 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:24.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.898396 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.898463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.898752 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:26.398331 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.398440 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:26.398836 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:26.898830 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.898904 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.899295 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.399188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.399497 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.898260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.898590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.398733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:28.898717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:29.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.398849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:29.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.898399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.898772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:31.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:31.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:31.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.898777 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.398810 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.398883 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.399201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.899041 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.899121 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.899453 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.398148 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.398223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.398492 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:33.898787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:34.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.398528 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.398873 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:34.898221 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.898605 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.898472 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.898882 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:35.898940 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:36.398373 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.398454 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.398749 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:36.898854 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.898926 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.899222 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.398175 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.398272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.398626 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.898642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:38.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:38.398766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:38.898476 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.898890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.398379 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.398485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.398800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:40.398507 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.398604 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.398907 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:40.398953 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:40.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.898635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.398325 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.398863 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.398319 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.398385 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.398670 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.898377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:42.898763 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:43.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:43.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.898461 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.898733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.398250 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.898255 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:45.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.398663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:45.398717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:45.898312 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.898398 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.398512 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.398593 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.398928 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.898755 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.898837 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:47.399074 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.399155 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.399470 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:47.399520 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:47.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.398547 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.398425 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.398876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.898573 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.898645 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.899024 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:49.899073 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:50.398808 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.399215 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:50.898894 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.898974 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.899314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.399073 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.399145 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.399405 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.899204 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.899286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.899637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:51.899692 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:52.398394 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.398470 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:52.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.898334 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.398736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.898467 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.898914 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:54.398587 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.398670 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.398930 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:54.398971 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:54.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.398429 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.398501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.398821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.898643 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.398726 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.898668 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.899021 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:56.899088 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:57.398828 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.398910 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.399188 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:57.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.899073 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.899382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.399133 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.399235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.399594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:59.398261 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:59.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.399357 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.399435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.399772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.898558 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.898912 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:01.398629 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.398704 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.399062 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:01.399123 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:01.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.898960 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.899233 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.399234 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.399313 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.399704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.398641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:03.898751 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:04.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.398413 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.398743 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:04.898440 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.898790 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.398493 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.398570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.898712 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.899049 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:05.899102 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:06.398845 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.398927 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.399275 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:06.899212 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.899287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.899619 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.398388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.398739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.898423 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.898501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:08.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:08.398759 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:08.898430 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.898507 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.398214 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.398290 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.398601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.898266 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.898705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.398295 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.398377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.898349 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.898702 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:10.898757 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:11.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:11.898435 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.898509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.898839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.398738 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.398804 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.399069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.898825 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.898900 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.899217 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:12.899278 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:13.399064 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.399138 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.399479 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:13.898174 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.898254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.898539 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.898437 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.898877 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:15.398539 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.398617 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.398894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:15.398947 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:15.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.898402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.898784 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.398330 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.398731 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.898535 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.898609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.898886 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:17.398882 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.398955 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.399291 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:17.399351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:17.899139 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.899551 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.398620 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.898649 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.898387 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.898473 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:19.898804 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:20.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:20.898334 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.898762 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.398456 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.898383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:22.398748 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.398819 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:22.399332 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:22.899045 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.899124 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.899438 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.398179 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.398688 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.398222 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.398629 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:24.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:25.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.398380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.398720 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:25.898403 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.898472 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.898649 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.898727 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.899069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:26.899125 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:27.398556 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.398654 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.398964 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:27.898756 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.898845 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.899194 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.398978 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.399057 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.399387 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.899171 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.899242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.899511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:28.899553 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:29.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:29.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.898467 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.898858 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:31.398431 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.398844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:31.398900 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:31.898545 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.898622 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.898916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.398834 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.398911 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.899021 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.899098 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.899424 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.398133 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.398202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.398473 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.898147 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.898235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:33.898642 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:34.398163 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:34.898191 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.898275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.898407 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:35.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:36.398446 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.398521 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:36.898729 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.898812 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.899129 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.399112 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.399185 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.399511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:38.398267 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.398710 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:38.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:38.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.398240 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.398351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:40.398360 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.398435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.398766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:40.398819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:40.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.898314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.898412 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.898487 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:42.898748 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:43.398416 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.398491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.398846 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:43.898235 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.398722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.898411 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.898483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.898775 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:44.898824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:45.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:45.898365 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.898459 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.898837 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.398716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.898502 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.898576 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.898840 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:46.898879 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:47.398781 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.398852 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:47.898950 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.899024 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.899371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.399121 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.399194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.399456 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.899245 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.899322 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.899641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:48.899693 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:49.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.398748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:49.898250 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.398347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.398703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.898421 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.898500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.898849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:51.398536 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.398624 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.398900 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:51.398944 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:51.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.398314 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.398399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.898717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:53.898780 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:54.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:54.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.898745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.398872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.898317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:56.398264 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.398341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:56.398721 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:56.898737 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.399000 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.399068 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.399335 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.899058 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.899134 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.899469 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:58.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.398317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:58.398749 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:58.898385 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:00.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.398481 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:00.398824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:00.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.898373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.398432 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.398511 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.898600 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:02.398363 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.398458 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.398848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:02.398903 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:02.898598 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.898677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.899033 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.398801 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.398882 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.399146 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.898939 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.899014 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.899351 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:04.399028 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.399109 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.399429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:04.399479 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:04.898171 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.898241 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.898523 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.398299 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.898372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.398612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.898577 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.898653 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.899006 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:06.899062 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:07.398886 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.398973 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.399304 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:07.899089 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.899159 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.899439 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.399244 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.399316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.399642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.898339 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:09.398430 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:09.398796 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:09.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.398393 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.398469 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.398815 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.898372 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:11.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:11.398848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:11.898377 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.398587 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.898339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.398375 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.398449 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.398799 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.898228 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.898581 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:13.898622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:14.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:14.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.398332 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:15.898699 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:16.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:16.898702 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.898784 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.899056 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.398983 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.399055 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.399412 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.899241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.899319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.899615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:17.899667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:18.398328 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.398395 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:18.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.898389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.898756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.898524 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.898598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.898881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:20.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:20.398727 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:20.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.898361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.398238 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.398309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:22.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.398717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:22.398773 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:22.898431 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.898499 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.898524 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.898868 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:24.398560 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.398637 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.398927 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:24.398969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:24.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.398721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.898307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.898627 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.898800 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.899142 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:26.899196 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:27.398976 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.399052 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.399314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:27.899092 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.899164 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.899471 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.398602 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.898655 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:29.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:29.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:29.898408 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.898906 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.398302 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.398631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.898286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.898730 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:31.398439 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:31.398911 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:31.898555 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.898623 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.898889 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.398937 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.399013 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.399352 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.899143 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.899571 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.398155 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.398227 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.398484 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.898595 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:33.898651 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:34.398324 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:34.898420 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.898491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.898356 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.898432 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.898728 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:35.898819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:36.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.398549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:36.898859 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.898933 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.899273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.399136 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.399213 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.399567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.898588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:38.398300 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.398379 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:38.398713 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:38.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.398283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:40.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.398419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:40.398761 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:40.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.898291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.898631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.398327 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.898757 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:42.398724 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.398796 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.399059 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:42.399111 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:42.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.898936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.899284 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.399100 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.399176 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.399519 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.898212 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.898287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.898548 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.398697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.898401 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.898475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:44.898860 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:45.398241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.398315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.398573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:45.898329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.898750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.398673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.898698 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.899039 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:46.899080 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:47.398977 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.399049 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.399400 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:47.899044 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.899122 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.899468 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.398202 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.398275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.398540 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.898650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:49.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:49.398711 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:49.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.898682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.398255 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.398634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.898338 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.898764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:51.398436 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.398820 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:51.398875 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:51.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.898647 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.898247 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.898414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:53.898813 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:54.398461 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.398534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.398794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:54.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.898766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.398305 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.898321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.898601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:56.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.398353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:56.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:56.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.898806 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.899207 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.398957 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.399027 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.399310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.899115 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.899188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.899518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.398611 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.898363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:58.898670 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:59.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:59.898427 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.898517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.898807 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:00.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.399475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 06:38:00.898197 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.898269 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:01.398343 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:01.398781 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:01.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.898354 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.898739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.398615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:03.898700 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:04.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.398687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:04.898364 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.898443 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.398679 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.898464 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.898794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:05.898848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:06.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.398546 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.398819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:06.898821 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.898898 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.899244 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.399177 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.399526 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.898233 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.898305 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.898583 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:08.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:08.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:08.898439 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.898512 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.398318 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.898371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.898351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:10.898697 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:11.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.398699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:11.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:12.898765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:13.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.398909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:13.898601 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.898682 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.899003 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.398694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.898453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.898911 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:14.898969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:15.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.398607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:15.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.898416 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.398312 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.898563 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.898635 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.898893 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:17.398825 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.398897 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.399203 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:17.399251 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:17.899015 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.899092 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.899429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.399192 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.399272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.399543 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.898701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.898230 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.898303 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:19.898691 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:20.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:20.898295 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.398453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.398559 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:21.898782 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:22.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.398740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:22.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.398750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.898299 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.898725 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:24.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.398635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:24.398676 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:24.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.898338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.398523 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.898660 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.898920 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:26.398605 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.398677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.399010 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:26.399063 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:26.898789 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.898863 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.899190 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.400218 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.400306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:27.400637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.898246 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.898312 1633651 node_ready.go:38] duration metric: took 6m0.000267561s for node "functional-364120" to be "Ready" ...
	I1216 06:38:27.901509 1633651 out.go:203] 
	W1216 06:38:27.904340 1633651 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:38:27.904359 1633651 out.go:285] * 
	* 
	W1216 06:38:27.906499 1633651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:38:27.909191 1633651 out.go:203] 

                                                
                                                
** /stderr **
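The stderr above shows the same failure on every poll: each GET to https://192.168.49.2:8441/api/v1/nodes/functional-364120 ends in "connect: connection refused", so the node never reports Ready and the 6m0s WaitNodeCondition deadline expires. A minimal manual probe of that endpoint (a sketch, not part of the test run; assumes curl is available inside the kicbase container) would be:

	docker exec functional-364120 curl -ksS --max-time 5 https://localhost:8441/healthz \
	  || echo "kube-apiserver on :8441 not answering (matches the connection-refused errors above)"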
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-364120 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.621055374s for "functional-364120" cluster.
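The failing invocation can be replayed with the exact arguments quoted above; under the same conditions it should reproduce the GUEST_START exit (status 80):

	out/minikube-linux-arm64 start -p functional-364120 --alsologtostderr -v=8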
I1216 06:38:28.520536 1599255 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
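The full docker inspect dump above can be narrowed to just the fields this post-mortem relies on by passing a Go-template --format string; a sketch, with the field paths copied from the JSON above:

	docker inspect functional-364120 --format \
	  'state={{.State.Status}} ip={{(index .NetworkSettings.Networks "functional-364120").IPAddress}} apiserver-hostport={{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'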
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (314.594669ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
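--format={{.Host}} only reports the host state ("Running" above); the non-zero exit reflects the other components. A sketch of querying the remaining fields (field names assumed to match minikube's default status output):

	out/minikube-linux-arm64 status -p functional-364120 \
	  --format 'host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'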
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 logs -n 25: (1.035766761s)
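The -n 25 flag limits the dump to the most recent lines of each log source; as the advice box in the stderr suggests, the complete logs can also be written to a file for attaching to a GitHub issue:

	out/minikube-linux-arm64 -p functional-364120 logs --file=logs.txt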
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/1599255.pem                                                                                         │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image rm kicbase/echo-server:functional-487532 --alsologtostderr                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /usr/share/ca-certificates/1599255.pem                                                                             │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/15992552.pem                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /usr/share/ca-certificates/15992552.pem                                                                            │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image save --daemon kicbase/echo-server:functional-487532 --alsologtostderr                                                     │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/test/nested/copy/1599255/hosts                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format short --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format yaml --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh pgrep buildkitd                                                                                                             │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ image          │ functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr                                            │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format json --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format table --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete         │ -p functional-487532                                                                                                                              │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:24 UTC │
	│ start          │ -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:24 UTC │                     │
	│ start          │ -p functional-364120 --alsologtostderr -v=8                                                                                                       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:32 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:32:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:32:21.945678 1633651 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:32:21.945884 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.945913 1633651 out.go:374] Setting ErrFile to fd 2...
	I1216 06:32:21.945938 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.946236 1633651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:32:21.946683 1633651 out.go:368] Setting JSON to false
	I1216 06:32:21.947701 1633651 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33293,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:32:21.947809 1633651 start.go:143] virtualization:  
	I1216 06:32:21.951426 1633651 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:32:21.955191 1633651 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:32:21.955256 1633651 notify.go:221] Checking for updates...
	I1216 06:32:21.958173 1633651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:32:21.961154 1633651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:21.964261 1633651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:32:21.967271 1633651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:32:21.970206 1633651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:32:21.973784 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:21.973958 1633651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:32:22.008677 1633651 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:32:22.008820 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.071471 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.061898568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.071599 1633651 docker.go:319] overlay module found
	I1216 06:32:22.074586 1633651 out.go:179] * Using the docker driver based on existing profile
	I1216 06:32:22.077482 1633651 start.go:309] selected driver: docker
	I1216 06:32:22.077504 1633651 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.077607 1633651 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:32:22.077718 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.133247 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.124039104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.133687 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:22.133753 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:22.133810 1633651 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.136881 1633651 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:32:22.139682 1633651 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:32:22.142506 1633651 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:32:22.145532 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:22.145589 1633651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:32:22.145600 1633651 cache.go:65] Caching tarball of preloaded images
	I1216 06:32:22.145641 1633651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:32:22.145690 1633651 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:32:22.145701 1633651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:32:22.145813 1633651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:32:22.165180 1633651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:32:22.165200 1633651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:32:22.165222 1633651 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:32:22.165256 1633651 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:32:22.165333 1633651 start.go:364] duration metric: took 48.796µs to acquireMachinesLock for "functional-364120"
	I1216 06:32:22.165354 1633651 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:32:22.165360 1633651 fix.go:54] fixHost starting: 
	I1216 06:32:22.165613 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:22.182587 1633651 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:32:22.182616 1633651 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:32:22.185776 1633651 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:32:22.185814 1633651 machine.go:94] provisionDockerMachine start ...
	I1216 06:32:22.185896 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.204643 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.205060 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.205076 1633651 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:32:22.340733 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.340761 1633651 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:32:22.340833 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.359374 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.359683 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.359701 1633651 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:32:22.513698 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.513777 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.532110 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.532428 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.532445 1633651 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:32:22.668828 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:32:22.668856 1633651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:32:22.668881 1633651 ubuntu.go:190] setting up certificates
	I1216 06:32:22.668900 1633651 provision.go:84] configureAuth start
	I1216 06:32:22.668975 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:22.686750 1633651 provision.go:143] copyHostCerts
	I1216 06:32:22.686794 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686839 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:32:22.686850 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686924 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:32:22.687014 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687038 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:32:22.687049 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687078 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:32:22.687125 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687146 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:32:22.687154 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687181 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:32:22.687234 1633651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:32:22.948191 1633651 provision.go:177] copyRemoteCerts
	I1216 06:32:22.948261 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:32:22.948301 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.965164 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.060207 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 06:32:23.060306 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:32:23.077647 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 06:32:23.077712 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:32:23.095215 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 06:32:23.095292 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:32:23.112813 1633651 provision.go:87] duration metric: took 443.895655ms to configureAuth
	I1216 06:32:23.112841 1633651 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:32:23.113039 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:23.113160 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.130832 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:23.131171 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:23.131200 1633651 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:32:23.456336 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:32:23.456407 1633651 machine.go:97] duration metric: took 1.270583728s to provisionDockerMachine
	I1216 06:32:23.456430 1633651 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:32:23.456444 1633651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:32:23.456549 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:32:23.456623 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.474584 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.572573 1633651 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:32:23.576065 1633651 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 06:32:23.576089 1633651 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 06:32:23.576094 1633651 command_runner.go:130] > VERSION_ID="12"
	I1216 06:32:23.576099 1633651 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 06:32:23.576104 1633651 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 06:32:23.576107 1633651 command_runner.go:130] > ID=debian
	I1216 06:32:23.576111 1633651 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 06:32:23.576116 1633651 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 06:32:23.576121 1633651 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 06:32:23.576161 1633651 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:32:23.576184 1633651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:32:23.576195 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:32:23.576257 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:32:23.576334 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:32:23.576345 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 06:32:23.576419 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:32:23.576428 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> /etc/test/nested/copy/1599255/hosts
	I1216 06:32:23.576497 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:32:23.584272 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:23.602073 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:32:23.620211 1633651 start.go:296] duration metric: took 163.749097ms for postStartSetup
	I1216 06:32:23.620332 1633651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:32:23.620393 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.637607 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.729817 1633651 command_runner.go:130] > 11%
	I1216 06:32:23.729920 1633651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:32:23.734460 1633651 command_runner.go:130] > 173G
	I1216 06:32:23.734888 1633651 fix.go:56] duration metric: took 1.569523929s for fixHost
	I1216 06:32:23.734910 1633651 start.go:83] releasing machines lock for "functional-364120", held for 1.569567934s
	I1216 06:32:23.734992 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:23.753392 1633651 ssh_runner.go:195] Run: cat /version.json
	I1216 06:32:23.753419 1633651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:32:23.753445 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.753482 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.775365 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.776190 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.872489 1633651 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 06:32:23.964085 1633651 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 06:32:23.966949 1633651 ssh_runner.go:195] Run: systemctl --version
	I1216 06:32:23.972881 1633651 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 06:32:23.972927 1633651 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 06:32:23.973332 1633651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:32:24.017041 1633651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 06:32:24.021688 1633651 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 06:32:24.021875 1633651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:32:24.021943 1633651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:32:24.030849 1633651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:32:24.030874 1633651 start.go:496] detecting cgroup driver to use...
	I1216 06:32:24.030909 1633651 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:32:24.030973 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:32:24.046872 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:32:24.060299 1633651 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:32:24.060392 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:32:24.076826 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:32:24.090325 1633651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:32:24.210022 1633651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:32:24.329836 1633651 docker.go:234] disabling docker service ...
	I1216 06:32:24.329935 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:32:24.345813 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:32:24.359799 1633651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:32:24.482084 1633651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:32:24.592216 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:32:24.607323 1633651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:32:24.620059 1633651 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 06:32:24.621570 1633651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:32:24.621685 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.630471 1633651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:32:24.630583 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.638917 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.647722 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.656274 1633651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:32:24.664335 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.674249 1633651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.682423 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.691805 1633651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:32:24.699096 1633651 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 06:32:24.700134 1633651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:32:24.707996 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:24.828004 1633651 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:32:24.995020 1633651 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:32:24.995147 1633651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:32:24.998673 1633651 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 06:32:24.998710 1633651 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 06:32:24.998717 1633651 command_runner.go:130] > Device: 0,73	Inode: 1638        Links: 1
	I1216 06:32:24.998724 1633651 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:24.998732 1633651 command_runner.go:130] > Access: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998737 1633651 command_runner.go:130] > Modify: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998743 1633651 command_runner.go:130] > Change: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998747 1633651 command_runner.go:130] >  Birth: -
	I1216 06:32:24.999054 1633651 start.go:564] Will wait 60s for crictl version
	I1216 06:32:24.999171 1633651 ssh_runner.go:195] Run: which crictl
	I1216 06:32:25.003803 1633651 command_runner.go:130] > /usr/local/bin/crictl
	I1216 06:32:25.003920 1633651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:32:25.030365 1633651 command_runner.go:130] > Version:  0.1.0
	I1216 06:32:25.030401 1633651 command_runner.go:130] > RuntimeName:  cri-o
	I1216 06:32:25.030407 1633651 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 06:32:25.030415 1633651 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 06:32:25.032653 1633651 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:32:25.032766 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.062220 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.062244 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.062252 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.062258 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.062271 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.062277 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.062281 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.062287 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.062295 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.062298 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.062306 1633651 command_runner.go:130] >      static
	I1216 06:32:25.062310 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.062314 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.062318 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.062324 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.062328 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.062335 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.062355 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.062366 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.062371 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.062783 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.091083 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.091135 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.091142 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.091169 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.091182 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.091188 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.091193 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.091205 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.091210 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.091218 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.091223 1633651 command_runner.go:130] >      static
	I1216 06:32:25.091226 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.091230 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.091244 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.091254 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.091262 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.091274 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.091278 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.091282 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.091286 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.097058 1633651 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:32:25.100055 1633651 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:32:25.116990 1633651 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:32:25.121062 1633651 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 06:32:25.121217 1633651 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:32:25.121338 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:25.121400 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.161132 1633651 command_runner.go:130] > {
	I1216 06:32:25.161156 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.161162 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161171 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.161176 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161183 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.161197 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161202 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161212 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.161220 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.161224 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161229 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.161237 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161245 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161248 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161253 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161267 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.161272 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161278 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.161289 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161295 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161303 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.161313 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.161317 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161325 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.161333 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161342 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161350 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161353 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161360 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.161368 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161373 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.161376 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161380 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161388 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.161400 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.161403 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161408 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.161415 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.161424 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161431 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161435 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161442 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.161450 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161456 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.161459 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161469 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161477 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.161485 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.161489 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161493 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.161499 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161511 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161514 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161529 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161540 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161544 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161554 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161567 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.161571 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161578 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.161582 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161588 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161601 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.161614 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.161618 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161623 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.161631 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161636 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161639 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161643 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161647 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161667 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161675 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161682 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.161686 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161692 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.161701 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161705 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161714 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.161726 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.161730 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161734 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.161738 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161743 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161748 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161753 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161758 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161761 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161764 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161771 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.161779 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161785 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.161788 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161793 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161801 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.161814 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.161818 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161822 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.161826 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161830 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161836 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161839 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161846 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.161850 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161863 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.161870 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161874 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161882 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.161905 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.161913 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161918 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.161921 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161925 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161929 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161933 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161937 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161943 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161947 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161956 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.161960 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161965 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.161971 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161975 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161995 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.162003 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.162006 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.162010 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.162013 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.162017 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.162020 1633651 command_runner.go:130] >       },
	I1216 06:32:25.162029 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.162036 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.162040 1633651 command_runner.go:130] >     }
	I1216 06:32:25.162043 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.162046 1633651 command_runner.go:130] > }
	I1216 06:32:25.162230 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.162244 1633651 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:32:25.162311 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.189040 1633651 command_runner.go:130] > {
	I1216 06:32:25.189061 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.189066 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189085 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.189090 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189096 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.189100 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189103 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189112 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.189120 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.189125 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189133 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.189141 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189146 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189157 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189161 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189168 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.189171 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189177 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.189180 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189184 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189193 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.189201 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.189204 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189208 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.189212 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189217 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189220 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189223 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189230 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.189233 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189239 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.189242 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189246 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189255 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.189263 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.189266 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189270 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.189274 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.189278 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189281 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189284 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189291 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.189295 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189300 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.189309 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189313 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189322 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.189330 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.189333 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189337 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.189341 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189345 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189348 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189357 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189361 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189364 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189367 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189375 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.189378 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189384 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.189387 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189391 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189399 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.189407 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.189411 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189420 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.189423 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189427 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189431 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189435 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189439 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189444 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189453 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189460 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.189464 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189469 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.189473 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189486 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189495 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.189505 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.189508 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189513 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.189516 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189524 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189527 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189531 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189536 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189539 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189542 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189549 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.189553 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189558 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.189561 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189564 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189572 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.189580 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.189583 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189587 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.189591 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189595 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189597 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189600 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189607 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.189611 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189616 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.189620 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189623 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189631 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.189649 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.189653 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189660 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.189664 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189668 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189671 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189675 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189679 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189682 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189685 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189691 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.189695 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189700 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.189703 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189707 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189714 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.189722 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.189725 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189729 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.189732 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189736 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.189740 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189744 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189748 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.189751 1633651 command_runner.go:130] >     }
	I1216 06:32:25.189754 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.189758 1633651 command_runner.go:130] > }
	I1216 06:32:25.192082 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.192103 1633651 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:32:25.192110 1633651 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:32:25.192213 1633651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:32:25.192293 1633651 ssh_runner.go:195] Run: crio config
	I1216 06:32:25.241430 1633651 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 06:32:25.241454 1633651 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 06:32:25.241463 1633651 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 06:32:25.241467 1633651 command_runner.go:130] > #
	I1216 06:32:25.241474 1633651 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 06:32:25.241481 1633651 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 06:32:25.241487 1633651 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 06:32:25.241503 1633651 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 06:32:25.241507 1633651 command_runner.go:130] > # reload'.
	I1216 06:32:25.241513 1633651 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 06:32:25.241520 1633651 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 06:32:25.241526 1633651 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 06:32:25.241533 1633651 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 06:32:25.241546 1633651 command_runner.go:130] > [crio]
	I1216 06:32:25.241552 1633651 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 06:32:25.241558 1633651 command_runner.go:130] > # containers images, in this directory.
	I1216 06:32:25.242467 1633651 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 06:32:25.242525 1633651 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 06:32:25.243204 1633651 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 06:32:25.243220 1633651 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1216 06:32:25.243745 1633651 command_runner.go:130] > # imagestore = ""
	I1216 06:32:25.243759 1633651 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 06:32:25.243765 1633651 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 06:32:25.244384 1633651 command_runner.go:130] > # storage_driver = "overlay"
	I1216 06:32:25.244405 1633651 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 06:32:25.244412 1633651 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 06:32:25.244775 1633651 command_runner.go:130] > # storage_option = [
	I1216 06:32:25.245138 1633651 command_runner.go:130] > # ]
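Since root, runroot, storage_driver and storage_option are all commented out above, this node is running on the containers-storage.conf(5) defaults. As a minimal sketch of pinning them explicitly (the drop-in path and the mount option are illustrative assumptions, not part of this cluster's config):

	[crio]
	root = "/var/lib/containers/storage"        # image and container data (illustrative path)
	runroot = "/run/containers/storage"         # ephemeral state (illustrative path)
	storage_driver = "overlay"
	storage_option = [
		"overlay.mountopt=nodev",           # illustrative storage option
	]

Such a snippet would typically live in a drop-in file (for example /etc/crio/crio.conf.d/10-storage.conf) so the shipped crio.conf stays untouched.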
	I1216 06:32:25.245151 1633651 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 06:32:25.245190 1633651 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 06:32:25.245804 1633651 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 06:32:25.245817 1633651 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 06:32:25.245829 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 06:32:25.245834 1633651 command_runner.go:130] > # always happen on a node reboot
	I1216 06:32:25.246485 1633651 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 06:32:25.246511 1633651 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 06:32:25.246534 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 06:32:25.246545 1633651 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 06:32:25.247059 1633651 command_runner.go:130] > # version_file_persist = ""
	I1216 06:32:25.247081 1633651 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 06:32:25.247091 1633651 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 06:32:25.247784 1633651 command_runner.go:130] > # internal_wipe = true
	I1216 06:32:25.247805 1633651 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 06:32:25.247812 1633651 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 06:32:25.248459 1633651 command_runner.go:130] > # internal_repair = true
	I1216 06:32:25.248493 1633651 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 06:32:25.248501 1633651 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 06:32:25.248507 1633651 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 06:32:25.249140 1633651 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
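The wipe-related options above are all commented out, so CRI-O uses its built-in defaults. A minimal sketch of stating them explicitly, using only keys shown in this dump:

	[crio]
	internal_wipe = true                                    # wipe containers/images after reboot
	internal_repair = true                                  # attempt storage repair after a sudden restart
	version_file = "/var/run/crio/version"
	clean_shutdown_file = "/var/lib/crio/clean.shutdown"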
	I1216 06:32:25.249157 1633651 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 06:32:25.249161 1633651 command_runner.go:130] > [crio.api]
	I1216 06:32:25.249167 1633651 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 06:32:25.251400 1633651 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 06:32:25.251419 1633651 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 06:32:25.251426 1633651 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 06:32:25.251453 1633651 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 06:32:25.251465 1633651 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 06:32:25.251470 1633651 command_runner.go:130] > # stream_port = "0"
	I1216 06:32:25.251476 1633651 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 06:32:25.251480 1633651 command_runner.go:130] > # stream_enable_tls = false
	I1216 06:32:25.251487 1633651 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 06:32:25.251494 1633651 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 06:32:25.251501 1633651 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 06:32:25.251510 1633651 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251527 1633651 command_runner.go:130] > # stream_tls_cert = ""
	I1216 06:32:25.251540 1633651 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 06:32:25.251546 1633651 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251563 1633651 command_runner.go:130] > # stream_tls_key = ""
	I1216 06:32:25.251575 1633651 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 06:32:25.251585 1633651 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 06:32:25.251591 1633651 command_runner.go:130] > # automatically pick up the changes.
	I1216 06:32:25.251603 1633651 command_runner.go:130] > # stream_tls_ca = ""
	I1216 06:32:25.251622 1633651 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251658 1633651 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 06:32:25.251672 1633651 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251677 1633651 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
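With these defaults the stream server listens on 127.0.0.1 with a random port and no TLS. A sketch of locking it to a fixed port with TLS, using only keys shown above (the port number and certificate paths are placeholders):

	[crio.api]
	stream_address = "127.0.0.1"
	stream_port = "10010"                      # fixed port instead of "0" (random)
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"   # placeholder path
	stream_tls_key = "/etc/crio/stream.key"    # placeholder path
	stream_tls_ca = "/etc/crio/stream-ca.crt"  # placeholder path

Per the comments above, CRI-O re-reads these certificate files when they change, so rotating them does not require a restart.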
	I1216 06:32:25.251692 1633651 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 06:32:25.251703 1633651 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 06:32:25.251707 1633651 command_runner.go:130] > [crio.runtime]
	I1216 06:32:25.251713 1633651 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 06:32:25.251719 1633651 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 06:32:25.251735 1633651 command_runner.go:130] > # "nofile=1024:2048"
	I1216 06:32:25.251746 1633651 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 06:32:25.251751 1633651 command_runner.go:130] > # default_ulimits = [
	I1216 06:32:25.251754 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251760 1633651 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 06:32:25.251767 1633651 command_runner.go:130] > # no_pivot = false
	I1216 06:32:25.251773 1633651 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 06:32:25.251779 1633651 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 06:32:25.251788 1633651 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 06:32:25.251794 1633651 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 06:32:25.251799 1633651 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 06:32:25.251815 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251827 1633651 command_runner.go:130] > # conmon = ""
	I1216 06:32:25.251832 1633651 command_runner.go:130] > # Cgroup setting for conmon
	I1216 06:32:25.251838 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 06:32:25.251853 1633651 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 06:32:25.251866 1633651 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 06:32:25.251872 1633651 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 06:32:25.251879 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251884 1633651 command_runner.go:130] > # conmon_env = [
	I1216 06:32:25.251887 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251893 1633651 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 06:32:25.251898 1633651 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 06:32:25.251906 1633651 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 06:32:25.251910 1633651 command_runner.go:130] > # default_env = [
	I1216 06:32:25.251931 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251956 1633651 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 06:32:25.251970 1633651 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1216 06:32:25.251982 1633651 command_runner.go:130] > # selinux = false
	I1216 06:32:25.251995 1633651 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 06:32:25.252003 1633651 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 06:32:25.252037 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252047 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.252055 1633651 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 06:32:25.252060 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252066 1633651 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 06:32:25.252073 1633651 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 06:32:25.252082 1633651 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 06:32:25.252088 1633651 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 06:32:25.252097 1633651 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 06:32:25.252125 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252136 1633651 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 06:32:25.252147 1633651 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 06:32:25.252161 1633651 command_runner.go:130] > # the cgroup blockio controller.
	I1216 06:32:25.252165 1633651 command_runner.go:130] > # blockio_config_file = ""
	I1216 06:32:25.252172 1633651 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 06:32:25.252176 1633651 command_runner.go:130] > # blockio parameters.
	I1216 06:32:25.252182 1633651 command_runner.go:130] > # blockio_reload = false
	I1216 06:32:25.252207 1633651 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 06:32:25.252224 1633651 command_runner.go:130] > # irqbalance daemon.
	I1216 06:32:25.252230 1633651 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 06:32:25.252251 1633651 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 06:32:25.252260 1633651 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 06:32:25.252270 1633651 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 06:32:25.252276 1633651 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 06:32:25.252283 1633651 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 06:32:25.252291 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252295 1633651 command_runner.go:130] > # rdt_config_file = ""
	I1216 06:32:25.252300 1633651 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 06:32:25.252305 1633651 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 06:32:25.252321 1633651 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 06:32:25.252339 1633651 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 06:32:25.252356 1633651 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 06:32:25.252372 1633651 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 06:32:25.252380 1633651 command_runner.go:130] > # will be added.
	I1216 06:32:25.252385 1633651 command_runner.go:130] > # default_capabilities = [
	I1216 06:32:25.252388 1633651 command_runner.go:130] > # 	"CHOWN",
	I1216 06:32:25.252392 1633651 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 06:32:25.252405 1633651 command_runner.go:130] > # 	"FSETID",
	I1216 06:32:25.252411 1633651 command_runner.go:130] > # 	"FOWNER",
	I1216 06:32:25.252415 1633651 command_runner.go:130] > # 	"SETGID",
	I1216 06:32:25.252431 1633651 command_runner.go:130] > # 	"SETUID",
	I1216 06:32:25.252493 1633651 command_runner.go:130] > # 	"SETPCAP",
	I1216 06:32:25.252505 1633651 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 06:32:25.252509 1633651 command_runner.go:130] > # 	"KILL",
	I1216 06:32:25.252512 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252520 1633651 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 06:32:25.252530 1633651 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 06:32:25.252534 1633651 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 06:32:25.252541 1633651 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 06:32:25.252547 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252564 1633651 command_runner.go:130] > default_sysctls = [
	I1216 06:32:25.252577 1633651 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 06:32:25.252581 1633651 command_runner.go:130] > ]
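This cluster keeps the stock capability set (the list above is commented out) and only opens unprivileged low ports via the single default sysctl. A sketch of instead shipping a reduced capability list, reusing names from the commented default above:

	[crio.runtime]
	default_capabilities = [
		"CHOWN",
		"NET_BIND_SERVICE",
		"KILL",
	]
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
	]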
	I1216 06:32:25.252587 1633651 command_runner.go:130] > # List of devices on the host that a
	I1216 06:32:25.252597 1633651 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 06:32:25.252601 1633651 command_runner.go:130] > # allowed_devices = [
	I1216 06:32:25.252605 1633651 command_runner.go:130] > # 	"/dev/fuse",
	I1216 06:32:25.252610 1633651 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 06:32:25.252613 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252624 1633651 command_runner.go:130] > # List of additional devices. specified as
	I1216 06:32:25.252649 1633651 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 06:32:25.252661 1633651 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 06:32:25.252667 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252677 1633651 command_runner.go:130] > # additional_devices = [
	I1216 06:32:25.252685 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252691 1633651 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 06:32:25.252703 1633651 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 06:32:25.252716 1633651 command_runner.go:130] > # 	"/etc/cdi",
	I1216 06:32:25.252739 1633651 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 06:32:25.252743 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252750 1633651 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 06:32:25.252759 1633651 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 06:32:25.252769 1633651 command_runner.go:130] > # Defaults to false.
	I1216 06:32:25.252779 1633651 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 06:32:25.252786 1633651 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 06:32:25.252792 1633651 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 06:32:25.252807 1633651 command_runner.go:130] > # hooks_dir = [
	I1216 06:32:25.252819 1633651 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 06:32:25.252823 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252829 1633651 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 06:32:25.252851 1633651 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 06:32:25.252857 1633651 command_runner.go:130] > # its default mounts from the following two files:
	I1216 06:32:25.252863 1633651 command_runner.go:130] > #
	I1216 06:32:25.252870 1633651 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 06:32:25.252876 1633651 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 06:32:25.252882 1633651 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 06:32:25.252886 1633651 command_runner.go:130] > #
	I1216 06:32:25.252893 1633651 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 06:32:25.252917 1633651 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 06:32:25.252940 1633651 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 06:32:25.252947 1633651 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 06:32:25.252950 1633651 command_runner.go:130] > #
	I1216 06:32:25.252955 1633651 command_runner.go:130] > # default_mounts_file = ""
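A sketch of wiring up a custom default-mounts file in the /SRC:/DST format described above (the file path and the example mount line are illustrative):

	[crio.runtime]
	default_mounts_file = "/etc/containers/crio-mounts.conf"
	# /etc/containers/crio-mounts.conf would then contain one mount per line, e.g.
	#   /usr/share/ca-certificates:/etc/ssl/certs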
	I1216 06:32:25.252963 1633651 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 06:32:25.252970 1633651 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 06:32:25.252977 1633651 command_runner.go:130] > # pids_limit = -1
	I1216 06:32:25.252989 1633651 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1216 06:32:25.253005 1633651 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 06:32:25.253018 1633651 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 06:32:25.253043 1633651 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 06:32:25.253055 1633651 command_runner.go:130] > # log_size_max = -1
	I1216 06:32:25.253064 1633651 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 06:32:25.253068 1633651 command_runner.go:130] > # log_to_journald = false
	I1216 06:32:25.253080 1633651 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 06:32:25.253090 1633651 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 06:32:25.253096 1633651 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 06:32:25.253101 1633651 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 06:32:25.253123 1633651 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 06:32:25.253128 1633651 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 06:32:25.253151 1633651 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 06:32:25.253157 1633651 command_runner.go:130] > # read_only = false
	I1216 06:32:25.253169 1633651 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 06:32:25.253183 1633651 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 06:32:25.253188 1633651 command_runner.go:130] > # live configuration reload.
	I1216 06:32:25.253196 1633651 command_runner.go:130] > # log_level = "info"
	I1216 06:32:25.253219 1633651 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 06:32:25.253232 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.253236 1633651 command_runner.go:130] > # log_filter = ""
	I1216 06:32:25.253252 1633651 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253264 1633651 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 06:32:25.253273 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253281 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253287 1633651 command_runner.go:130] > # uid_mappings = ""
	I1216 06:32:25.253293 1633651 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253300 1633651 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 06:32:25.253311 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253328 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253340 1633651 command_runner.go:130] > # gid_mappings = ""
	I1216 06:32:25.253346 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 06:32:25.253362 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253369 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253377 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253385 1633651 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 06:32:25.253391 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 06:32:25.253408 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253421 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253438 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253448 1633651 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 06:32:25.253459 1633651 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 06:32:25.253468 1633651 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 06:32:25.253475 1633651 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 06:32:25.253481 1633651 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 06:32:25.253487 1633651 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 06:32:25.253493 1633651 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 06:32:25.253518 1633651 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 06:32:25.253530 1633651 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 06:32:25.253541 1633651 command_runner.go:130] > # drop_infra_ctr = true
	I1216 06:32:25.253557 1633651 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 06:32:25.253566 1633651 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 06:32:25.253573 1633651 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 06:32:25.253581 1633651 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 06:32:25.253607 1633651 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 06:32:25.253614 1633651 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 06:32:25.253630 1633651 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 06:32:25.253643 1633651 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 06:32:25.253647 1633651 command_runner.go:130] > # shared_cpuset = ""
	I1216 06:32:25.253653 1633651 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 06:32:25.253666 1633651 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 06:32:25.253670 1633651 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 06:32:25.253681 1633651 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 06:32:25.253688 1633651 command_runner.go:130] > # pinns_path = ""
	I1216 06:32:25.253694 1633651 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 06:32:25.253718 1633651 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 06:32:25.253731 1633651 command_runner.go:130] > # enable_criu_support = true
	I1216 06:32:25.253736 1633651 command_runner.go:130] > # Enable/disable the generation of the container,
	I1216 06:32:25.253754 1633651 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 06:32:25.253764 1633651 command_runner.go:130] > # enable_pod_events = false
	I1216 06:32:25.253771 1633651 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 06:32:25.253776 1633651 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 06:32:25.253786 1633651 command_runner.go:130] > # default_runtime = "crun"
	I1216 06:32:25.253795 1633651 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 06:32:25.253803 1633651 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1216 06:32:25.253814 1633651 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 06:32:25.253835 1633651 command_runner.go:130] > # creation as a file is not desired either.
	I1216 06:32:25.253853 1633651 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 06:32:25.253868 1633651 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 06:32:25.253876 1633651 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 06:32:25.253879 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.253885 1633651 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 06:32:25.253891 1633651 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 06:32:25.253923 1633651 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 06:32:25.253938 1633651 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 06:32:25.253941 1633651 command_runner.go:130] > #
	I1216 06:32:25.253946 1633651 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 06:32:25.253955 1633651 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 06:32:25.253959 1633651 command_runner.go:130] > # runtime_type = "oci"
	I1216 06:32:25.253977 1633651 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 06:32:25.253987 1633651 command_runner.go:130] > # inherit_default_runtime = false
	I1216 06:32:25.254007 1633651 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 06:32:25.254012 1633651 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 06:32:25.254016 1633651 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 06:32:25.254020 1633651 command_runner.go:130] > # monitor_env = []
	I1216 06:32:25.254034 1633651 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 06:32:25.254044 1633651 command_runner.go:130] > # allowed_annotations = []
	I1216 06:32:25.254060 1633651 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 06:32:25.254072 1633651 command_runner.go:130] > # no_sync_log = false
	I1216 06:32:25.254076 1633651 command_runner.go:130] > # default_annotations = {}
	I1216 06:32:25.254081 1633651 command_runner.go:130] > # stream_websockets = false
	I1216 06:32:25.254088 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.254142 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.254155 1633651 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 06:32:25.254162 1633651 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 06:32:25.254179 1633651 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 06:32:25.254193 1633651 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 06:32:25.254197 1633651 command_runner.go:130] > #   in $PATH.
	I1216 06:32:25.254203 1633651 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 06:32:25.254216 1633651 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 06:32:25.254223 1633651 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 06:32:25.254226 1633651 command_runner.go:130] > #   state.
	I1216 06:32:25.254232 1633651 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 06:32:25.254254 1633651 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1216 06:32:25.254272 1633651 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 06:32:25.254285 1633651 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 06:32:25.254290 1633651 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 06:32:25.254302 1633651 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 06:32:25.254311 1633651 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 06:32:25.254317 1633651 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 06:32:25.254340 1633651 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 06:32:25.254347 1633651 command_runner.go:130] > #   The currently recognized values are:
	I1216 06:32:25.254369 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 06:32:25.254378 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 06:32:25.254387 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 06:32:25.254393 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 06:32:25.254405 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 06:32:25.254419 1633651 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 06:32:25.254436 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 06:32:25.254450 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 06:32:25.254456 1633651 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 06:32:25.254476 1633651 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 06:32:25.254491 1633651 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 06:32:25.254498 1633651 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 06:32:25.254509 1633651 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 06:32:25.254520 1633651 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 06:32:25.254530 1633651 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 06:32:25.254561 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 06:32:25.254585 1633651 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 06:32:25.254596 1633651 command_runner.go:130] > #   deprecated option "conmon".
	I1216 06:32:25.254603 1633651 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 06:32:25.254613 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 06:32:25.254624 1633651 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 06:32:25.254629 1633651 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 06:32:25.254639 1633651 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 06:32:25.254660 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 06:32:25.254668 1633651 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 06:32:25.254672 1633651 command_runner.go:130] > #   conmon-rs by using:
	I1216 06:32:25.254689 1633651 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 06:32:25.254709 1633651 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 06:32:25.254724 1633651 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 06:32:25.254731 1633651 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 06:32:25.254739 1633651 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 06:32:25.254746 1633651 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 06:32:25.254767 1633651 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 06:32:25.254780 1633651 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 06:32:25.254799 1633651 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 06:32:25.254817 1633651 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 06:32:25.254822 1633651 command_runner.go:130] > #   when a machine crash happens.
	I1216 06:32:25.254829 1633651 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 06:32:25.254840 1633651 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 06:32:25.254848 1633651 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 06:32:25.254855 1633651 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 06:32:25.254861 1633651 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 06:32:25.254884 1633651 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 06:32:25.254894 1633651 command_runner.go:130] > #
	I1216 06:32:25.254899 1633651 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 06:32:25.254902 1633651 command_runner.go:130] > #
	I1216 06:32:25.254922 1633651 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 06:32:25.254936 1633651 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 06:32:25.254939 1633651 command_runner.go:130] > #
	I1216 06:32:25.254946 1633651 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 06:32:25.254954 1633651 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 06:32:25.254957 1633651 command_runner.go:130] > #
	I1216 06:32:25.254964 1633651 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 06:32:25.254970 1633651 command_runner.go:130] > # feature.
	I1216 06:32:25.254973 1633651 command_runner.go:130] > #
	I1216 06:32:25.254979 1633651 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 06:32:25.255001 1633651 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 06:32:25.255015 1633651 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 06:32:25.255021 1633651 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 06:32:25.255037 1633651 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 06:32:25.255046 1633651 command_runner.go:130] > #
	I1216 06:32:25.255053 1633651 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 06:32:25.255059 1633651 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 06:32:25.255065 1633651 command_runner.go:130] > #
	I1216 06:32:25.255071 1633651 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 06:32:25.255076 1633651 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 06:32:25.255079 1633651 command_runner.go:130] > #
	I1216 06:32:25.255089 1633651 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 06:32:25.255098 1633651 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 06:32:25.255116 1633651 command_runner.go:130] > # limitation.
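Tying the notifier notes together, a sketch of allowing the annotation on the crun handler defined next (only the relevant key is shown; the value extends the list the dump reports below):

	[crio.runtime.runtimes.crun]
	allowed_annotations = [
		"io.containers.trace-syscall",
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then opt in by setting the io.kubernetes.cri-o.seccompNotifierAction annotation (for example to "stop") and, per the notes above, restartPolicy set to Never.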
	I1216 06:32:25.255127 1633651 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 06:32:25.255133 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 06:32:25.255143 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255151 1633651 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 06:32:25.255155 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255165 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255174 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255210 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255222 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255226 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255231 1633651 command_runner.go:130] > allowed_annotations = [
	I1216 06:32:25.255235 1633651 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 06:32:25.255238 1633651 command_runner.go:130] > ]
	I1216 06:32:25.255247 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255251 1633651 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 06:32:25.255267 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 06:32:25.255271 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255274 1633651 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 06:32:25.255290 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255300 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255305 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255324 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255354 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255360 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255364 1633651 command_runner.go:130] > privileged_without_host_devices = false
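Only the crun and runc handlers are registered here. An additional handler would follow the same table format; as a purely illustrative sketch (handler name, binary path and root directory are assumptions, not part of this cluster's config):

	[crio.runtime.runtimes.crun-debug]          # hypothetical handler name
	runtime_path = "/usr/local/bin/crun"        # hypothetical alternate binary
	runtime_type = "oci"
	runtime_root = "/run/crun-debug"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	privileged_without_host_devices = false

Pods select such a handler through a Kubernetes RuntimeClass whose handler field matches the table name (crun-debug here).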
	I1216 06:32:25.255371 1633651 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 06:32:25.255376 1633651 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 06:32:25.255383 1633651 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 06:32:25.255413 1633651 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 06:32:25.255438 1633651 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 06:32:25.255450 1633651 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 06:32:25.255462 1633651 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 06:32:25.255468 1633651 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 06:32:25.255478 1633651 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 06:32:25.255505 1633651 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 06:32:25.255522 1633651 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 06:32:25.255540 1633651 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 06:32:25.255551 1633651 command_runner.go:130] > # Example:
	I1216 06:32:25.255560 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 06:32:25.255569 1633651 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 06:32:25.255576 1633651 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 06:32:25.255584 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 06:32:25.255587 1633651 command_runner.go:130] > # cpuset = "0-1"
	I1216 06:32:25.255591 1633651 command_runner.go:130] > # cpushares = "5"
	I1216 06:32:25.255595 1633651 command_runner.go:130] > # cpuquota = "1000"
	I1216 06:32:25.255625 1633651 command_runner.go:130] > # cpuperiod = "100000"
	I1216 06:32:25.255636 1633651 command_runner.go:130] > # cpulimit = "35"
	I1216 06:32:25.255640 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.255645 1633651 command_runner.go:130] > # The workload name is workload-type.
	I1216 06:32:25.255652 1633651 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 06:32:25.255661 1633651 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 06:32:25.255667 1633651 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 06:32:25.255678 1633651 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 06:32:25.255686 1633651 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1216 06:32:25.255715 1633651 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 06:32:25.255733 1633651 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 06:32:25.255738 1633651 command_runner.go:130] > # Default value is set to true
	I1216 06:32:25.255749 1633651 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 06:32:25.255755 1633651 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 06:32:25.255760 1633651 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 06:32:25.255767 1633651 command_runner.go:130] > # Default value is set to 'false'
	I1216 06:32:25.255771 1633651 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 06:32:25.255776 1633651 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 06:32:25.255807 1633651 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 06:32:25.255817 1633651 command_runner.go:130] > # timezone = ""
	I1216 06:32:25.255824 1633651 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 06:32:25.255830 1633651 command_runner.go:130] > #
	I1216 06:32:25.255836 1633651 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 06:32:25.255846 1633651 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 06:32:25.255850 1633651 command_runner.go:130] > [crio.image]
	I1216 06:32:25.255856 1633651 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 06:32:25.255866 1633651 command_runner.go:130] > # default_transport = "docker://"
	I1216 06:32:25.255888 1633651 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 06:32:25.255905 1633651 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255915 1633651 command_runner.go:130] > # global_auth_file = ""
	I1216 06:32:25.255920 1633651 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 06:32:25.255925 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255931 1633651 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.255940 1633651 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 06:32:25.255955 1633651 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255961 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255968 1633651 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 06:32:25.255989 1633651 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 06:32:25.255997 1633651 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1216 06:32:25.256008 1633651 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1216 06:32:25.256014 1633651 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 06:32:25.256020 1633651 command_runner.go:130] > # pause_command = "/pause"
	I1216 06:32:25.256026 1633651 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 06:32:25.256032 1633651 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 06:32:25.256042 1633651 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 06:32:25.256057 1633651 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 06:32:25.256069 1633651 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 06:32:25.256085 1633651 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 06:32:25.256096 1633651 command_runner.go:130] > # pinned_images = [
	I1216 06:32:25.256100 1633651 command_runner.go:130] > # ]
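A sketch of pinning the pause image so it is excluded from the kubelet's garbage collection, reusing the default image name shown below:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
	]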
	I1216 06:32:25.256106 1633651 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 06:32:25.256116 1633651 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 06:32:25.256122 1633651 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 06:32:25.256131 1633651 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 06:32:25.256139 1633651 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 06:32:25.256144 1633651 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 06:32:25.256150 1633651 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 06:32:25.256179 1633651 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 06:32:25.256192 1633651 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 06:32:25.256207 1633651 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1216 06:32:25.256217 1633651 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 06:32:25.256222 1633651 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 06:32:25.256229 1633651 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 06:32:25.256238 1633651 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 06:32:25.256242 1633651 command_runner.go:130] > # changing them here.
	I1216 06:32:25.256266 1633651 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 06:32:25.256283 1633651 command_runner.go:130] > # insecure_registries = [
	I1216 06:32:25.256293 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256303 1633651 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 06:32:25.256311 1633651 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1216 06:32:25.256321 1633651 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 06:32:25.256331 1633651 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 06:32:25.256347 1633651 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 06:32:25.256360 1633651 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 06:32:25.256372 1633651 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 06:32:25.256380 1633651 command_runner.go:130] > # auto_reload_registries = false
	I1216 06:32:25.256386 1633651 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 06:32:25.256395 1633651 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1216 06:32:25.256404 1633651 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 06:32:25.256408 1633651 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 06:32:25.256422 1633651 command_runner.go:130] > # The mode of short name resolution.
	I1216 06:32:25.256436 1633651 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 06:32:25.256452 1633651 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1216 06:32:25.256479 1633651 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 06:32:25.256484 1633651 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 06:32:25.256490 1633651 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1216 06:32:25.256497 1633651 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 06:32:25.256512 1633651 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 06:32:25.256532 1633651 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 06:32:25.256544 1633651 command_runner.go:130] > # CNI plugins.
	I1216 06:32:25.256548 1633651 command_runner.go:130] > [crio.network]
	I1216 06:32:25.256566 1633651 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 06:32:25.256583 1633651 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1216 06:32:25.256590 1633651 command_runner.go:130] > # cni_default_network = ""
	I1216 06:32:25.256596 1633651 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 06:32:25.256603 1633651 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 06:32:25.256610 1633651 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 06:32:25.256626 1633651 command_runner.go:130] > # plugin_dirs = [
	I1216 06:32:25.256650 1633651 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 06:32:25.256654 1633651 command_runner.go:130] > # ]
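
Note: CRI-O loads the first network configuration it finds under network_dir, with plugin binaries resolved from plugin_dirs. For illustration only (minikube provisions its own CNI, kindnet, for this run; names and subnet are placeholders), a minimal bridge definition one could drop at /etc/cni/net.d/10-example-bridge.conflist might look like:

    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        }
      ]
    }
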
	I1216 06:32:25.256678 1633651 command_runner.go:130] > # List of included pod metrics.
	I1216 06:32:25.256691 1633651 command_runner.go:130] > # included_pod_metrics = [
	I1216 06:32:25.256695 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256701 1633651 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 06:32:25.256708 1633651 command_runner.go:130] > [crio.metrics]
	I1216 06:32:25.256712 1633651 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 06:32:25.256717 1633651 command_runner.go:130] > # enable_metrics = false
	I1216 06:32:25.256723 1633651 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 06:32:25.256728 1633651 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 06:32:25.256737 1633651 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1216 06:32:25.256762 1633651 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 06:32:25.256774 1633651 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 06:32:25.256778 1633651 command_runner.go:130] > # metrics_collectors = [
	I1216 06:32:25.256799 1633651 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 06:32:25.256808 1633651 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 06:32:25.256813 1633651 command_runner.go:130] > # 	"containers_oom_total",
	I1216 06:32:25.256818 1633651 command_runner.go:130] > # 	"processes_defunct",
	I1216 06:32:25.256829 1633651 command_runner.go:130] > # 	"operations_total",
	I1216 06:32:25.256834 1633651 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 06:32:25.256839 1633651 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 06:32:25.256842 1633651 command_runner.go:130] > # 	"operations_errors_total",
	I1216 06:32:25.256847 1633651 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 06:32:25.256851 1633651 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 06:32:25.256855 1633651 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 06:32:25.256869 1633651 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 06:32:25.256888 1633651 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 06:32:25.256897 1633651 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 06:32:25.256901 1633651 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 06:32:25.256906 1633651 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 06:32:25.256913 1633651 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 06:32:25.256916 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256923 1633651 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 06:32:25.256930 1633651 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 06:32:25.256944 1633651 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 06:32:25.256952 1633651 command_runner.go:130] > # metrics_port = 9090
	I1216 06:32:25.256958 1633651 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 06:32:25.256967 1633651 command_runner.go:130] > # metrics_socket = ""
	I1216 06:32:25.256972 1633651 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 06:32:25.256979 1633651 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 06:32:25.256987 1633651 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 06:32:25.257000 1633651 command_runner.go:130] > # certificate on any modification event.
	I1216 06:32:25.257004 1633651 command_runner.go:130] > # metrics_cert = ""
	I1216 06:32:25.257023 1633651 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 06:32:25.257034 1633651 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 06:32:25.257039 1633651 command_runner.go:130] > # metrics_key = ""
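
Note: with the defaults shown above, metrics are disabled. A sketch of a drop-in that turns them on (the drop-in file name is arbitrary; host and port match the commented defaults):

    # /etc/crio/crio.conf.d/20-metrics.conf
    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090

After sudo systemctl restart crio, curl -s http://127.0.0.1:9090/metrics returns Prometheus-format output for the collectors listed above (all enabled by default).
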
	I1216 06:32:25.257061 1633651 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 06:32:25.257070 1633651 command_runner.go:130] > [crio.tracing]
	I1216 06:32:25.257076 1633651 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 06:32:25.257080 1633651 command_runner.go:130] > # enable_tracing = false
	I1216 06:32:25.257088 1633651 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 06:32:25.257099 1633651 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 06:32:25.257111 1633651 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 06:32:25.257127 1633651 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 06:32:25.257138 1633651 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 06:32:25.257142 1633651 command_runner.go:130] > [crio.nri]
	I1216 06:32:25.257156 1633651 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 06:32:25.257167 1633651 command_runner.go:130] > # enable_nri = true
	I1216 06:32:25.257172 1633651 command_runner.go:130] > # NRI socket to listen on.
	I1216 06:32:25.257181 1633651 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 06:32:25.257193 1633651 command_runner.go:130] > # NRI plugin directory to use.
	I1216 06:32:25.257198 1633651 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 06:32:25.257205 1633651 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 06:32:25.257210 1633651 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 06:32:25.257218 1633651 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 06:32:25.257323 1633651 command_runner.go:130] > # nri_disable_connections = false
	I1216 06:32:25.257337 1633651 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 06:32:25.257342 1633651 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 06:32:25.257358 1633651 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 06:32:25.257370 1633651 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 06:32:25.257375 1633651 command_runner.go:130] > # NRI default validator configuration.
	I1216 06:32:25.257383 1633651 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 06:32:25.257393 1633651 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 06:32:25.257397 1633651 command_runner.go:130] > # can be restricted/rejected:
	I1216 06:32:25.257403 1633651 command_runner.go:130] > # - OCI hook injection
	I1216 06:32:25.257409 1633651 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 06:32:25.257417 1633651 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 06:32:25.257431 1633651 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 06:32:25.257443 1633651 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 06:32:25.257465 1633651 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 06:32:25.257479 1633651 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 06:32:25.257485 1633651 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 06:32:25.257493 1633651 command_runner.go:130] > #
	I1216 06:32:25.257498 1633651 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 06:32:25.257503 1633651 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 06:32:25.257510 1633651 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 06:32:25.257516 1633651 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 06:32:25.257522 1633651 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 06:32:25.257549 1633651 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 06:32:25.257562 1633651 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 06:32:25.257568 1633651 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 06:32:25.257574 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.257593 1633651 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
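
Note: the NRI default validator is off by default. A sketch of a drop-in enabling it and rejecting OCI hook injection, using the table and key names exactly as printed in the dump above (check crio.conf(5) for the CRI-O version in use before relying on it):

    # /etc/crio/crio.conf.d/30-nri-validator.conf
    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
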
	I1216 06:32:25.257604 1633651 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 06:32:25.257609 1633651 command_runner.go:130] > [crio.stats]
	I1216 06:32:25.257639 1633651 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 06:32:25.257651 1633651 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 06:32:25.257655 1633651 command_runner.go:130] > # stats_collection_period = 0
	I1216 06:32:25.257662 1633651 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 06:32:25.257671 1633651 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 06:32:25.257675 1633651 command_runner.go:130] > # collection_period = 0
	I1216 06:32:25.259482 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219727326Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 06:32:25.259512 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219767515Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 06:32:25.259524 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219798038Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 06:32:25.259536 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219823548Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 06:32:25.259545 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219901653Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:25.259556 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.220263616Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 06:32:25.259571 1633651 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1216 06:32:25.260036 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:25.260064 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:25.260092 1633651 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:32:25.260122 1633651 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:32:25.260297 1633651 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:32:25.260383 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:32:25.268343 1633651 command_runner.go:130] > kubeadm
	I1216 06:32:25.268362 1633651 command_runner.go:130] > kubectl
	I1216 06:32:25.268366 1633651 command_runner.go:130] > kubelet
	I1216 06:32:25.268406 1633651 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:32:25.268462 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:32:25.276071 1633651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:32:25.288575 1633651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:32:25.300994 1633651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
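
Note: the generated manifest is written to /var/tmp/minikube/kubeadm.yaml.new before being diffed against the current one further below. To sanity-check such a file by hand, recent kubeadm releases (>= 1.26) can validate it directly; a sketch using the binaries path listed just above:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
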
	I1216 06:32:25.313670 1633651 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:32:25.317448 1633651 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 06:32:25.317550 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:25.453328 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:26.148228 1633651 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:32:26.148252 1633651 certs.go:195] generating shared ca certs ...
	I1216 06:32:26.148269 1633651 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.148410 1633651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:32:26.148482 1633651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:32:26.148493 1633651 certs.go:257] generating profile certs ...
	I1216 06:32:26.148601 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:32:26.148663 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:32:26.148727 1633651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:32:26.148740 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 06:32:26.148753 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 06:32:26.148765 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 06:32:26.148785 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 06:32:26.148802 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 06:32:26.148814 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 06:32:26.148830 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 06:32:26.148841 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 06:32:26.148892 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:32:26.148927 1633651 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:32:26.148935 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:32:26.148966 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:32:26.148996 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:32:26.149023 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:32:26.149078 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:26.149109 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.149127 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.149143 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.149727 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:32:26.167732 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:32:26.185872 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:32:26.203036 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:32:26.220347 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:32:26.238248 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:32:26.255572 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:32:26.272719 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:32:26.290975 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:32:26.308752 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:32:26.326261 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:32:26.344085 1633651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:32:26.357043 1633651 ssh_runner.go:195] Run: openssl version
	I1216 06:32:26.362895 1633651 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 06:32:26.363366 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.370980 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:32:26.378519 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382213 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382261 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382313 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.422786 1633651 command_runner.go:130] > 3ec20f2e
	I1216 06:32:26.423247 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:32:26.430703 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.437977 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:32:26.445376 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449306 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449352 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449400 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.489732 1633651 command_runner.go:130] > b5213941
	I1216 06:32:26.490221 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:32:26.498231 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.505778 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:32:26.513624 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517603 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517655 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517708 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.558501 1633651 command_runner.go:130] > 51391683
	I1216 06:32:26.558962 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
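
Note: the three blocks above all follow the same pattern for installing a CA certificate system-wide: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and expose it as /etc/ssl/certs/<hash>.0. Condensed sketch for one of the files from this run:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    # OpenSSL looks trusted CAs up by subject hash under /etc/ssl/certs/<hash>.0.
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    # Same check as the `sudo test -L` calls above; verify resolves via the hash link.
    test -L "/etc/ssl/certs/${HASH}.0" && openssl verify "$CERT"
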
	I1216 06:32:26.566709 1633651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570687 1633651 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570714 1633651 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 06:32:26.570721 1633651 command_runner.go:130] > Device: 259,1	Inode: 1064557     Links: 1
	I1216 06:32:26.570728 1633651 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:26.570734 1633651 command_runner.go:130] > Access: 2025-12-16 06:28:17.989070314 +0000
	I1216 06:32:26.570739 1633651 command_runner.go:130] > Modify: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570745 1633651 command_runner.go:130] > Change: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570750 1633651 command_runner.go:130] >  Birth: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570807 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:32:26.611178 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.611643 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:32:26.653044 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.653496 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:32:26.693948 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.694452 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:32:26.737177 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.737685 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:32:26.777863 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.778315 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 06:32:26.821770 1633651 command_runner.go:130] > Certificate will not expire
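
Note: each -checkend 86400 call above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours). Exit status 0 prints "Certificate will not expire", non-zero prints "Certificate will expire". Minimal sketch with one of the certs checked here:

    # Exit 0 means the cert is still valid 24 h from now; non-zero means it
    # expires sooner (or has already expired).
    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "more than 24h of validity left"
    else
      echo "expires within 24h or already expired"
    fi
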
	I1216 06:32:26.822198 1633651 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:26.822282 1633651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:32:26.822342 1633651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:32:26.848560 1633651 cri.go:89] found id: ""
	I1216 06:32:26.848631 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:32:26.856311 1633651 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 06:32:26.856334 1633651 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 06:32:26.856341 1633651 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 06:32:26.856353 1633651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:32:26.856377 1633651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:32:26.856451 1633651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:32:26.863716 1633651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:32:26.864139 1633651 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-364120" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.864257 1633651 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "functional-364120" cluster setting kubeconfig missing "functional-364120" context setting]
	I1216 06:32:26.864570 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
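
Note: the "kubeconfig missing ... cluster setting / context setting" repair above is roughly equivalent to registering the cluster, user, and context by hand. A sketch with kubectl, reusing the server address, profile name, and certificate paths from this run:

    export KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
    kubectl config set-cluster functional-364120 \
      --server=https://192.168.49.2:8441 \
      --certificate-authority=/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt
    kubectl config set-credentials functional-364120 \
      --client-certificate=/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt \
      --client-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
    kubectl config set-context functional-364120 \
      --cluster=functional-364120 --user=functional-364120
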
	I1216 06:32:26.865235 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.865467 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.866570 1633651 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 06:32:26.866631 1633651 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 06:32:26.866668 1633651 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 06:32:26.866693 1633651 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 06:32:26.866720 1633651 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 06:32:26.867179 1633651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:32:26.868151 1633651 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 06:32:26.877051 1633651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 06:32:26.877090 1633651 kubeadm.go:602] duration metric: took 20.700092ms to restartPrimaryControlPlane
	I1216 06:32:26.877101 1633651 kubeadm.go:403] duration metric: took 54.908954ms to StartCluster
	I1216 06:32:26.877118 1633651 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.877187 1633651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.877859 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.878064 1633651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:32:26.878625 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:26.878682 1633651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:32:26.878749 1633651 addons.go:70] Setting storage-provisioner=true in profile "functional-364120"
	I1216 06:32:26.878762 1633651 addons.go:239] Setting addon storage-provisioner=true in "functional-364120"
	I1216 06:32:26.878787 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.879288 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.879473 1633651 addons.go:70] Setting default-storageclass=true in profile "functional-364120"
	I1216 06:32:26.879497 1633651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-364120"
	I1216 06:32:26.879803 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.884633 1633651 out.go:179] * Verifying Kubernetes components...
	I1216 06:32:26.887314 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:26.918200 1633651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:32:26.919874 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.920155 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.920453 1633651 addons.go:239] Setting addon default-storageclass=true in "functional-364120"
	I1216 06:32:26.920538 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.920986 1633651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:26.921004 1633651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:32:26.921061 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.921340 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.964659 1633651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:26.964697 1633651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:32:26.964756 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.965286 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:26.998084 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:27.098293 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:27.125997 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:27.132422 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:27.897996 1633651 node_ready.go:35] waiting up to 6m0s for node "functional-364120" to be "Ready" ...
	I1216 06:32:27.898129 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:27.898194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:27.898417 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898455 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898484 1633651 retry.go:31] will retry after 293.203887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898523 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898548 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898555 1633651 retry.go:31] will retry after 361.667439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.192028 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.251245 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.251292 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.251318 1633651 retry.go:31] will retry after 421.770055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.261399 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.326104 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.326166 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.326190 1633651 retry.go:31] will retry after 230.03946ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.398272 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.398664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.557150 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.610627 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.614370 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.614405 1633651 retry.go:31] will retry after 431.515922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.673577 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.751124 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.751167 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.751187 1633651 retry.go:31] will retry after 416.921651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.898406 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.046157 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:29.107254 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.107314 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.107371 1633651 retry.go:31] will retry after 899.303578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.168518 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:29.225793 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.229337 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.229371 1633651 retry.go:31] will retry after 758.152445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.398643 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.398767 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.399082 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.898862 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.898939 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.899317 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:29.899390 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:29.988648 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.011610 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.113177 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.113245 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.113269 1633651 retry.go:31] will retry after 739.984539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134431 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.134488 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134525 1633651 retry.go:31] will retry after 743.078754ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.398873 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.398944 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.399345 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.854128 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.878717 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.899202 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.899283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.899567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.948589 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.948629 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.948651 1633651 retry.go:31] will retry after 2.54132752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989038 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.989082 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989107 1633651 retry.go:31] will retry after 1.925489798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:31.398656 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.398729 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.399083 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:31.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.898714 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.899058 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.398954 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.399038 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:32.399469 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:32.898198 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.898298 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.914948 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:32.974729 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:32.974766 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:32.974784 1633651 retry.go:31] will retry after 2.13279976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.398213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.398308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:33.491042 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:33.546485 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:33.550699 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.550734 1633651 retry.go:31] will retry after 1.927615537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.899219 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.899329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.899638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:34.898732 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
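(Editor's note on the node_ready warnings above: the test polls the node object roughly every 500ms and checks its Ready condition, tolerating "connection refused" while the apiserver is down. A minimal sketch of that check with client-go, shown as a small helper package; the polling cadence and error wording mirror the log, everything else is illustrative.)

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls the node every 500ms (the cadence visible in the
    // timestamps above) and returns once its Ready condition is True.
    func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // Mirrors the "will retry" warnings: connection refused while
                // the apiserver restarts is treated as transient.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }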
	I1216 06:32:35.108136 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:35.168080 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.168179 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.168237 1633651 retry.go:31] will retry after 2.609957821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.398216 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.398310 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.398589 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:35.478854 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:35.539410 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.539453 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.539472 1633651 retry.go:31] will retry after 2.66810674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.898940 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.899019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.899395 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.399231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.399312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.399638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.898542 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:36.898864 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:37.398807 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.399243 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:37.778747 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:37.833515 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:37.837237 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.837278 1633651 retry.go:31] will retry after 4.537651284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.898560 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.898639 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.898976 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.208455 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:38.268308 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:38.268354 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.268373 1633651 retry.go:31] will retry after 8.612374195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.398733 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.398807 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.399077 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.899000 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.899085 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.899556 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:38.899628 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:39.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.398769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:39.898353 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.898737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.898499 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.898578 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:41.398243 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:41.398654 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:41.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.898352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.375464 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:42.399185 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.399260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.399531 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.439480 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:42.439520 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.439538 1633651 retry.go:31] will retry after 13.723834965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.899110 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.899183 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.899457 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.398171 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.398246 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.398594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.898384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:43.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:44.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:44.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.398383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.398309 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.398384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:46.398787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:46.881536 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:46.898964 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.899056 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.899361 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.940375 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:46.943961 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:46.943995 1633651 retry.go:31] will retry after 5.072276608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:47.398701 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.398787 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.399064 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:47.898839 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.898914 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.899236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:48.398915 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.398993 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.399340 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:48.399397 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:48.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.899069 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.399214 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.399301 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.399707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.398466 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.398735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:50.898770 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:51.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:51.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.898592 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.017198 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:52.080330 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:52.080367 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.080387 1633651 retry.go:31] will retry after 19.488213597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.398170 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.398254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.398603 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.898357 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.898430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.898751 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:52.898809 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:53.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.398509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.398780 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:53.898306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.898746 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.398531 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.898536 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.898616 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.898937 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:54.899000 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:55.398275 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:55.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.164267 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:56.225232 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:56.225280 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.225300 1633651 retry.go:31] will retry after 14.108855756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.398529 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.398594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.898932 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.899282 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:56.899334 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:57.399213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.399288 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.399591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:57.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.898457 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.898545 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.898936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:59.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:59.398702 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:59.898313 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.398851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.898739 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:01.398863 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.398936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:01.399305 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:01.898923 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.899005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.899364 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.399175 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.399247 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.399610 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.898189 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.898266 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.398333 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.398410 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.398779 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.898460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.898527 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.898800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:03.898847 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:04.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.398745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:04.898458 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.898534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.898848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.398531 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.398614 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.898633 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.898709 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.899055 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:05.899137 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:06.398909 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.398987 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.399357 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:06.898176 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.898262 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.898675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.898344 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.898420 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.898721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:08.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:08.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:08.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.398398 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.398479 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.398785 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.898336 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.898666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:10.335122 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:10.396460 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:10.396519 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.396538 1633651 retry.go:31] will retry after 12.344116424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.398561 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.398627 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.398890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:10.398937 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:10.898605 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.898693 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.899053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.398885 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.569711 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:11.631078 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:11.634606 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.634637 1633651 retry.go:31] will retry after 14.712851021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.899031 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.899113 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.899432 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.898240 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.898566 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:12.898607 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:13.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:13.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.398404 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.398483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.898318 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:14.898742 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:15.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.398323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:15.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.398644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.898716 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.899100 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:16.899164 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:17.398918 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.399005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:17.899071 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.899230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.899613 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.398204 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.398291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.898350 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.898684 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:19.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:19.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:19.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.898318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.898648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.398306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.898284 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.898678 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.398294 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.898665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:21.898722 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:22.398602 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.398676 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.399053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:22.741700 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:22.805176 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:22.805212 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.805230 1633651 retry.go:31] will retry after 37.521073757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.898570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.398233 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.398648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:23.898753 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:24.398440 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:24.898547 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.898618 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.898926 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.898639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:26.348396 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:26.398844 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.398921 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.399279 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:26.399329 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:26.417393 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:26.417436 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.417455 1633651 retry.go:31] will retry after 31.35447413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.898149 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.898223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.898585 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.398341 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.398414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.898330 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.898845 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.898417 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.898496 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.898819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:28.898872 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:29.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.398632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:29.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.398830 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.898474 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:31.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.398330 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:31.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:31.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.898324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.898636 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.398372 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.398442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.898400 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.898485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.898850 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:33.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:33.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:33.898438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.398516 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.398867 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.898456 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.898537 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.898909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:35.398591 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.398658 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.398916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:35.398977 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:35.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.898728 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.898803 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:37.399202 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.399278 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.399639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:37.399694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:37.898374 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.398505 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.398571 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.898677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.398839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.898300 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:39.898667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:40.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:40.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.398462 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.398529 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.398809 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:41.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:42.398755 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.398839 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.399236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:42.898983 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.899053 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.899331 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.398183 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.398258 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.898308 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:44.398622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:44.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.398342 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.398448 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:46.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:46.398739 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:46.898513 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.898594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.898959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.398772 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.398859 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.399168 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.898938 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.899012 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.899377 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:48.399044 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.399126 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.399458 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:48.399514 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:48.898185 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.898520 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.398630 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.898360 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.898434 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.898761 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:50.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:51.398426 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.398503 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.398913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:51.898663 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.898743 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.899196 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.398565 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.398648 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.399111 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.898692 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.898773 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.899132 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:52.899190 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:53.398951 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.399065 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.399370 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:53.898173 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.898248 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.898623 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:55.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:55.398707 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:55.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.898628 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.898703 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.899073 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:57.398945 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.399019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.399371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:57.399427 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:57.772952 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:57.834039 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837641 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837741 1633651 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
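Editor's note: the storage-provisioner addon fails here because kubectl apply tries to download the apiserver's OpenAPI schema for validation, and that endpoint is refusing connections just like the node polls above; minikube logs "apply failed, will retry" (addons.go:477). The sketch below wraps the same command from the log in a simple retry loop to illustrate that flow; the attempt count, delay, and helper name are assumptions of the example, not minikube's code.

	// applyretry_sketch.go - illustrative retry around the apply command from the log.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs the same apply command seen in the log until it
	// succeeds or attempts are exhausted, mirroring the "will retry" behaviour.
	func applyWithRetry(ctx context.Context, manifest string, attempts int, delay time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.CommandContext(ctx, "sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
			time.Sleep(delay) // give the apiserver time to come back before retrying
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry(context.Background(),
			"/etc/kubernetes/addons/storage-provisioner.yaml", 10, 5*time.Second)
		fmt.Println("apply result:", err)
	}

kubectl's own hint about --validate=false would only skip the schema check; the apply would still fail while the apiserver itself is unreachable.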
	I1216 06:33:57.899083 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.899158 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.899422 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.398161 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.898386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:59.898740 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:00.327789 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:34:00.398990 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.399071 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.399382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:00.427909 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.431971 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.432103 1633651 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:34:00.437092 1633651 out.go:179] * Enabled addons: 
	I1216 06:34:00.440884 1633651 addons.go:530] duration metric: took 1m33.562192947s for enable addons: enabled=[]
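Editor's note: the addon phase gives up after 1m33s with an empty enabled list because the apiserver never came back within the retry window. When reproducing this by hand, it can help to confirm the apiserver answers before re-running the addon enable; the probe below against the standard /readyz endpoint is a minimal sketch under that assumption (skipping TLS verification is only for a quick local health check, a real probe would trust the cluster CA from the kubeconfig).

	// readyz_sketch.go - quick local check that the apiserver answers before retrying.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Assumption of the example: local health probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < 120; i++ {
			resp, err := client.Get("https://192.168.49.2:8441/readyz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver ready; addons can be re-enabled")
				return
			}
			if err != nil {
				fmt.Println("apiserver not ready:", err)
			} else {
				fmt.Println("apiserver not ready: status", resp.StatusCode)
				resp.Body.Close()
			}
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for the apiserver")
	}

In this run the probe would keep failing, which matches the continued "connection refused" polls in the remainder of the log.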
	I1216 06:34:00.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.898392 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.898244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.898577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:02.398409 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.398488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.398818 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:02.398876 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:02.898375 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.898792 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.398319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.398577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.398335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.398654 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.898325 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.898400 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:04.898801 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:05.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.398957 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:05.898686 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.899122 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.398925 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.399010 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.399354 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.899043 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:06.899475 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:07.399211 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.399289 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.399665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:07.898337 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.898748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:09.399015 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.399090 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.399360 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:09.399412 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:09.899197 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.899275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.899628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.398251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.898348 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:11.898743 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:12.398541 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.398609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:12.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.898723 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.899079 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.398865 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.398945 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.399273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.899072 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.899151 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.899501 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:13.899561 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:14.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:14.898363 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.398508 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.398583 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:16.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:16.398775 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:16.898203 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.898272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.898528 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.398515 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.398598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.398936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:18.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.398771 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:18.398820 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:18.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.898653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.398357 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.398446 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.398791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.898510 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.898589 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.898872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.398763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:20.898758 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:21.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.398590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:21.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.898851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.398811 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.398886 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.399204 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.899306 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:22.899351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:23.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.399181 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.399518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:23.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.898332 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.398714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:25.398435 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.398518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.398899 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:25.398964 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:25.898643 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.898718 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.898991 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.398331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.398659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.898526 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.899075 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:27.898798 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:28.398464 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.398539 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.398917 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:28.898624 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.898699 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.899014 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.398878 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.399221 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.898995 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.899075 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.899431 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:29.899497 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:30.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.398295 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.398549 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:30.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.898674 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.398835 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.898315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:32.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.398696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:32.398762 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:32.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.898844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.398245 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:34.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:34.398791 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:34.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.898671 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.898348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.898663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.398430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.898879 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.898962 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.899298 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:36.899363 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:37.398940 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.399018 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.399339 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:37.899128 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.899202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.899475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.398196 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.398276 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.398617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.898346 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.898424 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.898788 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:39.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.398304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.398637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:39.398705 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	[log trimmed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-364120 poll repeated every ~500 ms from 06:34:39.898 through 06:35:39.898, each request carrying the same Accept and User-Agent headers and returning an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logged the same "will retry ... dial tcp 192.168.49.2:8441: connect: connection refused" warning roughly every 2-3 seconds, from 06:34:41.399 through 06:35:38.398]
	I1216 06:35:40.398507 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.398604 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.398907 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:40.398953 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:40.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.898635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.398325 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.398863 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.398319 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.398385 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.398670 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.898377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:42.898763 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:43.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:43.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.898461 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.898733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.398250 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.898255 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:45.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.398663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:45.398717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:45.898312 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.898398 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.398512 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.398593 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.398928 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.898755 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.898837 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:47.399074 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.399155 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.399470 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:47.399520 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:47.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.398547 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.398425 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.398876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.898573 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.898645 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.899024 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:49.899073 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:50.398808 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.399215 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:50.898894 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.898974 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.899314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.399073 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.399145 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.399405 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.899204 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.899286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.899637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:51.899692 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:52.398394 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.398470 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:52.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.898334 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.398736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.898467 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.898914 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:54.398587 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.398670 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.398930 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:54.398971 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:54.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.398429 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.398501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.398821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.898643 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.398726 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.898668 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.899021 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:56.899088 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:57.398828 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.398910 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.399188 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:57.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.899073 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.899382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.399133 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.399235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.399594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:59.398261 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:59.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.399357 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.399435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.399772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.898558 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.898912 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:01.398629 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.398704 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.399062 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:01.399123 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:01.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.898960 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.899233 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.399234 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.399313 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.399704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.398641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:03.898751 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:04.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.398413 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.398743 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:04.898440 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.898790 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.398493 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.398570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.898712 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.899049 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:05.899102 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:06.398845 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.398927 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.399275 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:06.899212 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.899287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.899619 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.398388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.398739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.898423 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.898501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:08.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:08.398759 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:08.898430 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.898507 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.398214 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.398290 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.398601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.898266 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.898705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.398295 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.398377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.898349 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.898702 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:10.898757 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:11.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:11.898435 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.898509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.898839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.398738 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.398804 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.399069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.898825 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.898900 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.899217 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:12.899278 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:13.399064 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.399138 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.399479 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:13.898174 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.898254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.898539 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.898437 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.898877 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:15.398539 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.398617 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.398894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:15.398947 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:15.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.898402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.898784 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.398330 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.398731 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.898535 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.898609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.898886 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:17.398882 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.398955 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.399291 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:17.399351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:17.899139 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.899551 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.398620 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.898649 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.898387 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.898473 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:19.898804 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:20.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:20.898334 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.898762 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.398456 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.898383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:22.398748 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.398819 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:22.399332 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:22.899045 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.899124 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.899438 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.398179 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.398688 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.398222 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.398629 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:24.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:25.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.398380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.398720 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:25.898403 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.898472 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.898649 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.898727 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.899069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:26.899125 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:27.398556 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.398654 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.398964 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:27.898756 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.898845 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.899194 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.398978 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.399057 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.399387 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.899171 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.899242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.899511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:28.899553 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:29.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:29.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.898467 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.898858 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:31.398431 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.398844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:31.398900 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:31.898545 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.898622 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.898916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.398834 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.398911 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.899021 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.899098 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.899424 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.398133 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.398202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.398473 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.898147 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.898235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:33.898642 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:34.398163 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:34.898191 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.898275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.898407 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:35.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:36.398446 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.398521 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:36.898729 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.898812 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.899129 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.399112 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.399185 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.399511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:38.398267 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.398710 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:38.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:38.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.398240 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.398351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:40.398360 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.398435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.398766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:40.398819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:40.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.898314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.898412 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.898487 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:42.898748 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:43.398416 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.398491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.398846 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:43.898235 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.398722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.898411 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.898483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.898775 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:44.898824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:45.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:45.898365 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.898459 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.898837 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.398716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.898502 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.898576 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.898840 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:46.898879 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:47.398781 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.398852 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:47.898950 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.899024 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.899371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.399121 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.399194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.399456 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.899245 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.899322 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.899641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:48.899693 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:49.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.398748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:49.898250 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.398347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.398703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.898421 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.898500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.898849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:51.398536 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.398624 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.398900 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:51.398944 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:51.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.398314 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.398399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.898717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:53.898780 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:54.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:54.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.898745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.398872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.898317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:56.398264 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.398341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:56.398721 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:56.898737 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.399000 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.399068 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.399335 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.899058 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.899134 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.899469 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:58.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.398317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:58.398749 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:58.898385 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:00.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.398481 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:00.398824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:00.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.898373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.398432 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.398511 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.898600 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:02.398363 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.398458 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.398848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:02.398903 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:02.898598 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.898677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.899033 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.398801 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.398882 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.399146 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.898939 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.899014 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.899351 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:04.399028 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.399109 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.399429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:04.399479 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:04.898171 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.898241 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.898523 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.398299 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.898372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.398612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.898577 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.898653 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.899006 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:06.899062 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:07.398886 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.398973 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.399304 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:07.899089 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.899159 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.899439 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.399244 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.399316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.399642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.898339 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:09.398430 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:09.398796 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:09.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.398393 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.398469 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.398815 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.898372 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:11.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:11.398848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:11.898377 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.398587 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.898339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.398375 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.398449 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.398799 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.898228 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.898581 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:13.898622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:14.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:14.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.398332 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:15.898699 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:16.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:16.898702 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.898784 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.899056 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.398983 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.399055 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.399412 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.899241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.899319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.899615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:17.899667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:18.398328 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.398395 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:18.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.898389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.898756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.898524 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.898598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.898881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:20.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:20.398727 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:20.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.898361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.398238 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.398309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:22.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.398717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:22.398773 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:22.898431 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.898499 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.898524 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.898868 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:24.398560 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.398637 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.398927 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:24.398969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:24.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.398721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.898307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.898627 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.898800 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.899142 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:26.899196 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:27.398976 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.399052 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.399314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:27.899092 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.899164 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.899471 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.398602 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.898655 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:29.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:29.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:29.898408 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.898906 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.398302 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.398631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.898286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.898730 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:31.398439 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:31.398911 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:31.898555 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.898623 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.898889 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.398937 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.399013 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.399352 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.899143 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.899571 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.398155 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.398227 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.398484 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.898595 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:33.898651 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:34.398324 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:34.898420 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.898491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.898356 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.898432 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.898728 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:35.898819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:36.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.398549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:36.898859 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.898933 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.899273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.399136 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.399213 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.399567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.898588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:38.398300 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.398379 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:38.398713 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:38.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.398283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:40.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.398419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:40.398761 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:40.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.898291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.898631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.398327 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.898757 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:42.398724 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.398796 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.399059 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:42.399111 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:42.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.898936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.899284 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.399100 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.399176 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.399519 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.898212 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.898287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.898548 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.398697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.898401 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.898475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:44.898860 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:45.398241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.398315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.398573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:45.898329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.898750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.398673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.898698 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.899039 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:46.899080 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:47.398977 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.399049 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.399400 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:47.899044 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.899122 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.899468 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.398202 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.398275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.398540 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.898650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:49.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:49.398711 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:49.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.898682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.398255 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.398634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.898338 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.898764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:51.398436 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.398820 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:51.398875 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:51.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.898647 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.898247 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.898414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:53.898813 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:54.398461 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.398534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.398794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:54.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.898766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.398305 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.898321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.898601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:56.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.398353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:56.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:56.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.898806 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.899207 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.398957 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.399027 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.399310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.899115 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.899188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.899518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.398611 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.898363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:58.898670 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:59.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:59.898427 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.898517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.898807 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:00.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.399475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 06:38:00.898197 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.898269 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:01.398343 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:01.398781 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:01.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.898354 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.898739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.398615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:03.898700 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:04.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.398687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:04.898364 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.898443 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.398679 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.898464 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.898794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:05.898848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:06.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.398546 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.398819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:06.898821 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.898898 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.899244 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.399177 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.399526 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.898233 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.898305 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.898583 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:08.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:08.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:08.898439 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.898512 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.398318 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.898371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.898351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:10.898697 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:11.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.398699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:11.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:12.898765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:13.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.398909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:13.898601 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.898682 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.899003 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.398694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.898453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.898911 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:14.898969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:15.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.398607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:15.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.898416 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.398312 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.898563 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.898635 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.898893 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:17.398825 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.398897 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.399203 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:17.399251 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:17.899015 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.899092 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.899429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.399192 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.399272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.399543 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.898701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.898230 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.898303 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:19.898691 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:20.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:20.898295 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.398453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.398559 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:21.898782 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:22.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.398740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:22.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.398750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.898299 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.898725 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:24.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.398635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:24.398676 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:24.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.898338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.398523 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.898660 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.898920 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:26.398605 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.398677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.399010 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:26.399063 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:26.898789 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.898863 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.899190 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.400218 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.400306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:27.400637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.898246 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.898312 1633651 node_ready.go:38] duration metric: took 6m0.000267561s for node "functional-364120" to be "Ready" ...
	I1216 06:38:27.901509 1633651 out.go:203] 
	W1216 06:38:27.904340 1633651 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:38:27.904359 1633651 out.go:285] * 
	W1216 06:38:27.906499 1633651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:38:27.909191 1633651 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932317848Z" level=info msg="Using the internal default seccomp profile"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932325314Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932333569Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932339411Z" level=info msg="RDT not available in the host system"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932352063Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933182179Z" level=info msg="Conmon does support the --sync option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933208198Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933225937Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933900902Z" level=info msg="Conmon does support the --sync option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933921595Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.934056086Z" level=info msg="Updated default CNI network name to "
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.934625401Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.934995232Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.935049066Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989476581Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989512372Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989552889Z" level=info msg="Create NRI interface"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989649866Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989658424Z" level=info msg="runtime interface created"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989668697Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989675409Z" level=info msg="runtime interface starting up..."
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.98968171Z" level=info msg="starting plugins..."
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.98969387Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989753948Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:32:24 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:38:29.804612    8550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:29.805149    8550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:29.806825    8550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:29.807336    8550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:29.809007    8550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:38:29 up  9:21,  0 user,  load average: 0.20, 0.28, 0.78
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:38:27 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:28 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 16 06:38:28 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:28 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:28 functional-364120 kubelet[8436]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:28 functional-364120 kubelet[8436]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:28 functional-364120 kubelet[8436]: E1216 06:38:28.210258    8436 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:28 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:28 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:28 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 16 06:38:28 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:28 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:28 functional-364120 kubelet[8457]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:28 functional-364120 kubelet[8457]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:28 functional-364120 kubelet[8457]: E1216 06:38:28.953919    8457 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:28 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:28 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:29 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 16 06:38:29 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:29 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:29 functional-364120 kubelet[8528]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:29 functional-364120 kubelet[8528]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:29 functional-364120 kubelet[8528]: E1216 06:38:29.718473    8528 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:29 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:29 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
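
The log above is the tail end of minikube's node-readiness wait: the client retries GET /api/v1/nodes/functional-364120 roughly every 500ms, every attempt is refused, and after the 6m0s deadline it gives up with "WaitNodeCondition: context deadline exceeded". A minimal, self-contained sketch of that poll-with-deadline pattern follows; probeNode and its always-failing body are illustrative stand-ins, not minikube's actual code.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// probeNode stands in for the real readiness check (a GET against
	// /api/v1/nodes/<name>); here it always fails, mimicking the refused
	// connections recorded in the log above.
	func probeNode(ctx context.Context) error {
		return errors.New("connect: connection refused")
	}

	func main() {
		// 6m0s matches the "wait 6m0s for node" deadline in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		ticker := time.NewTicker(500 * time.Millisecond) // retry interval seen in the log
		defer ticker.Stop()

		for {
			if err := probeNode(ctx); err == nil {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				// The state the test ends in: context deadline exceeded.
				fmt.Println("giving up:", ctx.Err())
				return
			case <-ticker.C:
				// fall through and retry
			}
		}
	}
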
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (343.273726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.09s)
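
The kubelet journal above also records why the apiserver never returns: every kubelet restart (counters 1136-1138) exits immediately with "kubelet is configured to not run on a host using cgroup v1", i.e. kubelet v1.35.0-beta.0 rejects cgroup v1 hosts outright, which indicates this builder (Ubuntu 20.04, 5.15 kernel, cgroupfs driver) is still on cgroup v1. A minimal diagnostic sketch follows, assuming the usual heuristic that a pure cgroup v2 host exposes /sys/fs/cgroup/cgroup.controllers; it is an illustration, not part of the test suite.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a host booted with the cgroup v2 unified hierarchy this file exists;
		// on a cgroup v1 (or hybrid) host it does not.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy): accepted by kubelet v1.35+")
		} else {
			fmt.Println("cgroup v1: rejected by kubelet v1.35.0-beta.0, as in the journal above")
		}
	}
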

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-364120 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-364120 get po -A: exit status 1 (64.867606ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-364120 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-364120 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-364120 get po -A"
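
All three assertions above reduce to one symptom: the kubeconfig for functional-364120 points at 192.168.49.2:8441 and nothing is listening there, because kube-apiserver never came up during SoftStart. A sketch of the equivalent reachability probe follows (the endpoint is copied from the stderr above; the snippet is illustrative, not part of the harness):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same address:port kubectl is rejected on above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 3*time.Second)
		if err != nil {
			// Expected while the apiserver is down: "connect: connection refused".
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
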
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
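
Two details in the inspect output above are worth decoding: HostConfig pins the kic container to Memory=4294967296 and NanoCpus=2000000000, matching the --memory=4096 (4 GiB) and CPUs:2 settings recorded in the profile, and NetworkSettings.Ports publishes 8441/tcp on 127.0.0.1:34263, the forwarded address host-side tooling uses to reach the (currently unreachable) apiserver. A tiny sketch of that arithmetic, with the constants copied from the inspect output:

	package main

	import "fmt"

	func main() {
		const memoryBytes = 4294967296 // HostConfig.Memory
		const nanoCPUs = 2000000000    // HostConfig.NanoCpus

		fmt.Printf("memory: %d GiB\n", memoryBytes/(1024*1024*1024)) // 4
		fmt.Printf("cpus:   %d\n", nanoCPUs/1_000_000_000)           // 2
		// Container port 8441/tcp is published on the host as 127.0.0.1:34263.
		fmt.Println("apiserver via host: 127.0.0.1:34263")
	}
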
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (329.437982ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 logs -n 25: (1.01608784s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/1599255.pem                                                                                         │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image rm kicbase/echo-server:functional-487532 --alsologtostderr                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /usr/share/ca-certificates/1599255.pem                                                                             │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/15992552.pem                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /usr/share/ca-certificates/15992552.pem                                                                            │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image save --daemon kicbase/echo-server:functional-487532 --alsologtostderr                                                     │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh sudo cat /etc/test/nested/copy/1599255/hosts                                                                                │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format short --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format yaml --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh            │ functional-487532 ssh pgrep buildkitd                                                                                                             │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ image          │ functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr                                            │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format json --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format table --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete         │ -p functional-487532                                                                                                                              │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:24 UTC │
	│ start          │ -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:24 UTC │                     │
	│ start          │ -p functional-364120 --alsologtostderr -v=8                                                                                                       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:32 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:32:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:32:21.945678 1633651 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:32:21.945884 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.945913 1633651 out.go:374] Setting ErrFile to fd 2...
	I1216 06:32:21.945938 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.946236 1633651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:32:21.946683 1633651 out.go:368] Setting JSON to false
	I1216 06:32:21.947701 1633651 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33293,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:32:21.947809 1633651 start.go:143] virtualization:  
	I1216 06:32:21.951426 1633651 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:32:21.955191 1633651 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:32:21.955256 1633651 notify.go:221] Checking for updates...
	I1216 06:32:21.958173 1633651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:32:21.961154 1633651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:21.964261 1633651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:32:21.967271 1633651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:32:21.970206 1633651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:32:21.973784 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:21.973958 1633651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:32:22.008677 1633651 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:32:22.008820 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.071471 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.061898568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.071599 1633651 docker.go:319] overlay module found
	I1216 06:32:22.074586 1633651 out.go:179] * Using the docker driver based on existing profile
	I1216 06:32:22.077482 1633651 start.go:309] selected driver: docker
	I1216 06:32:22.077504 1633651 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.077607 1633651 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:32:22.077718 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.133247 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.124039104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.133687 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:22.133753 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:22.133810 1633651 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.136881 1633651 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:32:22.139682 1633651 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:32:22.142506 1633651 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:32:22.145532 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:22.145589 1633651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:32:22.145600 1633651 cache.go:65] Caching tarball of preloaded images
	I1216 06:32:22.145641 1633651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:32:22.145690 1633651 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:32:22.145701 1633651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:32:22.145813 1633651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:32:22.165180 1633651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:32:22.165200 1633651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:32:22.165222 1633651 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:32:22.165256 1633651 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:32:22.165333 1633651 start.go:364] duration metric: took 48.796µs to acquireMachinesLock for "functional-364120"
	I1216 06:32:22.165354 1633651 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:32:22.165360 1633651 fix.go:54] fixHost starting: 
	I1216 06:32:22.165613 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:22.182587 1633651 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:32:22.182616 1633651 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:32:22.185776 1633651 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:32:22.185814 1633651 machine.go:94] provisionDockerMachine start ...
	I1216 06:32:22.185896 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.204643 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.205060 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.205076 1633651 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:32:22.340733 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.340761 1633651 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:32:22.340833 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.359374 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.359683 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.359701 1633651 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:32:22.513698 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.513777 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.532110 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.532428 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.532445 1633651 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:32:22.668828 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:32:22.668856 1633651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:32:22.668881 1633651 ubuntu.go:190] setting up certificates
	I1216 06:32:22.668900 1633651 provision.go:84] configureAuth start
	I1216 06:32:22.668975 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:22.686750 1633651 provision.go:143] copyHostCerts
	I1216 06:32:22.686794 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686839 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:32:22.686850 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686924 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:32:22.687014 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687038 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:32:22.687049 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687078 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:32:22.687125 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687146 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:32:22.687154 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687181 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:32:22.687234 1633651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:32:22.948191 1633651 provision.go:177] copyRemoteCerts
	I1216 06:32:22.948261 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:32:22.948301 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.965164 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.060207 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 06:32:23.060306 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:32:23.077647 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 06:32:23.077712 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:32:23.095215 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 06:32:23.095292 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:32:23.112813 1633651 provision.go:87] duration metric: took 443.895655ms to configureAuth
	I1216 06:32:23.112841 1633651 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:32:23.113039 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:23.113160 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.130832 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:23.131171 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:23.131200 1633651 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:32:23.456336 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:32:23.456407 1633651 machine.go:97] duration metric: took 1.270583728s to provisionDockerMachine
	I1216 06:32:23.456430 1633651 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:32:23.456444 1633651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:32:23.456549 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:32:23.456623 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.474584 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.572573 1633651 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:32:23.576065 1633651 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 06:32:23.576089 1633651 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 06:32:23.576094 1633651 command_runner.go:130] > VERSION_ID="12"
	I1216 06:32:23.576099 1633651 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 06:32:23.576104 1633651 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 06:32:23.576107 1633651 command_runner.go:130] > ID=debian
	I1216 06:32:23.576111 1633651 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 06:32:23.576116 1633651 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 06:32:23.576121 1633651 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 06:32:23.576161 1633651 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:32:23.576184 1633651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:32:23.576195 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:32:23.576257 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:32:23.576334 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:32:23.576345 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 06:32:23.576419 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:32:23.576428 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> /etc/test/nested/copy/1599255/hosts
	I1216 06:32:23.576497 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:32:23.584272 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:23.602073 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:32:23.620211 1633651 start.go:296] duration metric: took 163.749097ms for postStartSetup
	I1216 06:32:23.620332 1633651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:32:23.620393 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.637607 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.729817 1633651 command_runner.go:130] > 11%
	I1216 06:32:23.729920 1633651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:32:23.734460 1633651 command_runner.go:130] > 173G
	I1216 06:32:23.734888 1633651 fix.go:56] duration metric: took 1.569523929s for fixHost
	I1216 06:32:23.734910 1633651 start.go:83] releasing machines lock for "functional-364120", held for 1.569567934s
	I1216 06:32:23.734992 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:23.753392 1633651 ssh_runner.go:195] Run: cat /version.json
	I1216 06:32:23.753419 1633651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:32:23.753445 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.753482 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.775365 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.776190 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.872489 1633651 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 06:32:23.964085 1633651 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 06:32:23.966949 1633651 ssh_runner.go:195] Run: systemctl --version
	I1216 06:32:23.972881 1633651 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 06:32:23.972927 1633651 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 06:32:23.973332 1633651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:32:24.017041 1633651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 06:32:24.021688 1633651 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 06:32:24.021875 1633651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:32:24.021943 1633651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:32:24.030849 1633651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:32:24.030874 1633651 start.go:496] detecting cgroup driver to use...
	I1216 06:32:24.030909 1633651 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:32:24.030973 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:32:24.046872 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:32:24.060299 1633651 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:32:24.060392 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:32:24.076826 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:32:24.090325 1633651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:32:24.210022 1633651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:32:24.329836 1633651 docker.go:234] disabling docker service ...
	I1216 06:32:24.329935 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:32:24.345813 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:32:24.359799 1633651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:32:24.482084 1633651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:32:24.592216 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:32:24.607323 1633651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:32:24.620059 1633651 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 06:32:24.621570 1633651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:32:24.621685 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.630471 1633651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:32:24.630583 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.638917 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.647722 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.656274 1633651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:32:24.664335 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.674249 1633651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.682423 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.691805 1633651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:32:24.699096 1633651 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 06:32:24.700134 1633651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:32:24.707996 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:24.828004 1633651 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:32:24.995020 1633651 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:32:24.995147 1633651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:32:24.998673 1633651 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 06:32:24.998710 1633651 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 06:32:24.998717 1633651 command_runner.go:130] > Device: 0,73	Inode: 1638        Links: 1
	I1216 06:32:24.998724 1633651 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:24.998732 1633651 command_runner.go:130] > Access: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998737 1633651 command_runner.go:130] > Modify: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998743 1633651 command_runner.go:130] > Change: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998747 1633651 command_runner.go:130] >  Birth: -
	I1216 06:32:24.999054 1633651 start.go:564] Will wait 60s for crictl version
	I1216 06:32:24.999171 1633651 ssh_runner.go:195] Run: which crictl
	I1216 06:32:25.003803 1633651 command_runner.go:130] > /usr/local/bin/crictl
	I1216 06:32:25.003920 1633651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:32:25.030365 1633651 command_runner.go:130] > Version:  0.1.0
	I1216 06:32:25.030401 1633651 command_runner.go:130] > RuntimeName:  cri-o
	I1216 06:32:25.030407 1633651 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 06:32:25.030415 1633651 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 06:32:25.032653 1633651 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:32:25.032766 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.062220 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.062244 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.062252 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.062258 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.062271 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.062277 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.062281 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.062287 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.062295 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.062298 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.062306 1633651 command_runner.go:130] >      static
	I1216 06:32:25.062310 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.062314 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.062318 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.062324 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.062328 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.062335 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.062355 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.062366 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.062371 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.062783 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.091083 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.091135 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.091142 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.091169 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.091182 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.091188 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.091193 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.091205 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.091210 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.091218 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.091223 1633651 command_runner.go:130] >      static
	I1216 06:32:25.091226 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.091230 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.091244 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.091254 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.091262 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.091274 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.091278 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.091282 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.091286 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.097058 1633651 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:32:25.100055 1633651 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:32:25.116990 1633651 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:32:25.121062 1633651 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 06:32:25.121217 1633651 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:32:25.121338 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:25.121400 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.161132 1633651 command_runner.go:130] > {
	I1216 06:32:25.161156 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.161162 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161171 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.161176 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161183 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.161197 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161202 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161212 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.161220 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.161224 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161229 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.161237 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161245 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161248 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161253 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161267 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.161272 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161278 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.161289 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161295 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161303 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.161313 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.161317 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161325 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.161333 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161342 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161350 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161353 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161360 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.161368 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161373 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.161376 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161380 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161388 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.161400 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.161403 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161408 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.161415 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.161424 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161431 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161435 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161442 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.161450 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161456 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.161459 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161469 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161477 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.161485 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.161489 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161493 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.161499 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161511 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161514 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161529 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161540 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161544 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161554 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161567 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.161571 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161578 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.161582 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161588 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161601 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.161614 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.161618 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161623 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.161631 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161636 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161639 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161643 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161647 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161667 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161675 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161682 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.161686 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161692 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.161701 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161705 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161714 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.161726 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.161730 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161734 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.161738 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161743 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161748 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161753 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161758 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161761 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161764 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161771 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.161779 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161785 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.161788 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161793 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161801 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.161814 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.161818 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161822 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.161826 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161830 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161836 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161839 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161846 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.161850 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161863 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.161870 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161874 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161882 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.161905 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.161913 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161918 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.161921 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161925 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161929 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161933 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161937 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161943 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161947 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161956 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.161960 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161965 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.161971 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161975 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161995 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.162003 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.162006 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.162010 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.162013 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.162017 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.162020 1633651 command_runner.go:130] >       },
	I1216 06:32:25.162029 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.162036 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.162040 1633651 command_runner.go:130] >     }
	I1216 06:32:25.162043 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.162046 1633651 command_runner.go:130] > }
	I1216 06:32:25.162230 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.162244 1633651 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:32:25.162311 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.189040 1633651 command_runner.go:130] > {
	I1216 06:32:25.189061 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.189066 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189085 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.189090 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189096 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.189100 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189103 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189112 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.189120 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.189125 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189133 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.189141 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189146 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189157 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189161 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189168 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.189171 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189177 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.189180 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189184 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189193 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.189201 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.189204 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189208 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.189212 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189217 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189220 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189223 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189230 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.189233 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189239 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.189242 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189246 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189255 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.189263 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.189266 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189270 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.189274 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.189278 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189281 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189284 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189291 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.189295 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189300 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.189309 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189313 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189322 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.189330 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.189333 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189337 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.189341 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189345 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189348 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189357 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189361 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189364 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189367 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189375 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.189378 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189384 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.189387 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189391 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189399 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.189407 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.189411 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189420 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.189423 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189427 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189431 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189435 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189439 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189444 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189453 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189460 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.189464 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189469 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.189473 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189486 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189495 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.189505 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.189508 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189513 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.189516 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189524 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189527 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189531 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189536 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189539 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189542 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189549 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.189553 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189558 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.189561 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189564 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189572 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.189580 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.189583 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189587 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.189591 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189595 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189597 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189600 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189607 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.189611 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189616 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.189620 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189623 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189631 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.189649 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.189653 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189660 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.189664 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189668 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189671 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189675 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189679 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189682 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189685 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189691 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.189695 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189700 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.189703 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189707 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189714 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.189722 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.189725 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189729 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.189732 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189736 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.189740 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189744 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189748 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.189751 1633651 command_runner.go:130] >     }
	I1216 06:32:25.189754 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.189758 1633651 command_runner.go:130] > }
	I1216 06:32:25.192082 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.192103 1633651 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:32:25.192110 1633651 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:32:25.192213 1633651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:32:25.192293 1633651 ssh_runner.go:195] Run: crio config
	I1216 06:32:25.241430 1633651 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 06:32:25.241454 1633651 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 06:32:25.241463 1633651 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 06:32:25.241467 1633651 command_runner.go:130] > #
	I1216 06:32:25.241474 1633651 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 06:32:25.241481 1633651 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 06:32:25.241487 1633651 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 06:32:25.241503 1633651 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 06:32:25.241507 1633651 command_runner.go:130] > # reload'.
	I1216 06:32:25.241513 1633651 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 06:32:25.241520 1633651 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 06:32:25.241526 1633651 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 06:32:25.241533 1633651 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 06:32:25.241546 1633651 command_runner.go:130] > [crio]
	I1216 06:32:25.241552 1633651 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 06:32:25.241558 1633651 command_runner.go:130] > # container images, in this directory.
	I1216 06:32:25.242467 1633651 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 06:32:25.242525 1633651 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 06:32:25.243204 1633651 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 06:32:25.243220 1633651 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory rather than under the root directory.
	I1216 06:32:25.243745 1633651 command_runner.go:130] > # imagestore = ""
	I1216 06:32:25.243759 1633651 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 06:32:25.243765 1633651 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 06:32:25.244384 1633651 command_runner.go:130] > # storage_driver = "overlay"
	I1216 06:32:25.244405 1633651 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1216 06:32:25.244412 1633651 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 06:32:25.244775 1633651 command_runner.go:130] > # storage_option = [
	I1216 06:32:25.245138 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.245151 1633651 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 06:32:25.245190 1633651 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 06:32:25.245804 1633651 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 06:32:25.245817 1633651 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 06:32:25.245829 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 06:32:25.245834 1633651 command_runner.go:130] > # always happen on a node reboot
	I1216 06:32:25.246485 1633651 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 06:32:25.246511 1633651 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 06:32:25.246534 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 06:32:25.246545 1633651 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 06:32:25.247059 1633651 command_runner.go:130] > # version_file_persist = ""
	I1216 06:32:25.247081 1633651 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 06:32:25.247091 1633651 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 06:32:25.247784 1633651 command_runner.go:130] > # internal_wipe = true
	I1216 06:32:25.247805 1633651 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 06:32:25.247812 1633651 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 06:32:25.248459 1633651 command_runner.go:130] > # internal_repair = true
	I1216 06:32:25.248493 1633651 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 06:32:25.248501 1633651 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 06:32:25.248507 1633651 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 06:32:25.249140 1633651 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 06:32:25.249157 1633651 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 06:32:25.249161 1633651 command_runner.go:130] > [crio.api]
	I1216 06:32:25.249167 1633651 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 06:32:25.251400 1633651 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 06:32:25.251419 1633651 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 06:32:25.251426 1633651 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 06:32:25.251453 1633651 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 06:32:25.251465 1633651 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 06:32:25.251470 1633651 command_runner.go:130] > # stream_port = "0"
	I1216 06:32:25.251476 1633651 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 06:32:25.251480 1633651 command_runner.go:130] > # stream_enable_tls = false
	I1216 06:32:25.251487 1633651 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 06:32:25.251494 1633651 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 06:32:25.251501 1633651 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 06:32:25.251510 1633651 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251527 1633651 command_runner.go:130] > # stream_tls_cert = ""
	I1216 06:32:25.251540 1633651 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 06:32:25.251546 1633651 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251563 1633651 command_runner.go:130] > # stream_tls_key = ""
	I1216 06:32:25.251575 1633651 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 06:32:25.251585 1633651 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 06:32:25.251591 1633651 command_runner.go:130] > # automatically pick up the changes.
	I1216 06:32:25.251603 1633651 command_runner.go:130] > # stream_tls_ca = ""
	I1216 06:32:25.251622 1633651 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251658 1633651 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 06:32:25.251672 1633651 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251677 1633651 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 06:32:25.251692 1633651 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 06:32:25.251703 1633651 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 06:32:25.251707 1633651 command_runner.go:130] > [crio.runtime]
	I1216 06:32:25.251713 1633651 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 06:32:25.251719 1633651 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 06:32:25.251735 1633651 command_runner.go:130] > # "nofile=1024:2048"
	I1216 06:32:25.251746 1633651 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 06:32:25.251751 1633651 command_runner.go:130] > # default_ulimits = [
	I1216 06:32:25.251754 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251760 1633651 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 06:32:25.251767 1633651 command_runner.go:130] > # no_pivot = false
	I1216 06:32:25.251773 1633651 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 06:32:25.251779 1633651 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 06:32:25.251788 1633651 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 06:32:25.251794 1633651 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 06:32:25.251799 1633651 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 06:32:25.251815 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251827 1633651 command_runner.go:130] > # conmon = ""
	I1216 06:32:25.251832 1633651 command_runner.go:130] > # Cgroup setting for conmon
	I1216 06:32:25.251838 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 06:32:25.251853 1633651 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 06:32:25.251866 1633651 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 06:32:25.251872 1633651 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 06:32:25.251879 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251884 1633651 command_runner.go:130] > # conmon_env = [
	I1216 06:32:25.251887 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251893 1633651 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 06:32:25.251898 1633651 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 06:32:25.251906 1633651 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 06:32:25.251910 1633651 command_runner.go:130] > # default_env = [
	I1216 06:32:25.251931 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251956 1633651 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 06:32:25.251970 1633651 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1216 06:32:25.251982 1633651 command_runner.go:130] > # selinux = false
	I1216 06:32:25.251995 1633651 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 06:32:25.252003 1633651 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 06:32:25.252037 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252047 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.252055 1633651 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 06:32:25.252060 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252066 1633651 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 06:32:25.252073 1633651 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 06:32:25.252082 1633651 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 06:32:25.252088 1633651 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 06:32:25.252097 1633651 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 06:32:25.252125 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252136 1633651 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 06:32:25.252147 1633651 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 06:32:25.252161 1633651 command_runner.go:130] > # the cgroup blockio controller.
	I1216 06:32:25.252165 1633651 command_runner.go:130] > # blockio_config_file = ""
	I1216 06:32:25.252172 1633651 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 06:32:25.252176 1633651 command_runner.go:130] > # blockio parameters.
	I1216 06:32:25.252182 1633651 command_runner.go:130] > # blockio_reload = false
	I1216 06:32:25.252207 1633651 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 06:32:25.252224 1633651 command_runner.go:130] > # irqbalance daemon.
	I1216 06:32:25.252230 1633651 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 06:32:25.252251 1633651 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 06:32:25.252260 1633651 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 06:32:25.252270 1633651 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 06:32:25.252276 1633651 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 06:32:25.252283 1633651 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 06:32:25.252291 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252295 1633651 command_runner.go:130] > # rdt_config_file = ""
	I1216 06:32:25.252300 1633651 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 06:32:25.252305 1633651 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 06:32:25.252321 1633651 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 06:32:25.252339 1633651 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 06:32:25.252356 1633651 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 06:32:25.252372 1633651 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 06:32:25.252380 1633651 command_runner.go:130] > # will be added.
	I1216 06:32:25.252385 1633651 command_runner.go:130] > # default_capabilities = [
	I1216 06:32:25.252388 1633651 command_runner.go:130] > # 	"CHOWN",
	I1216 06:32:25.252392 1633651 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 06:32:25.252405 1633651 command_runner.go:130] > # 	"FSETID",
	I1216 06:32:25.252411 1633651 command_runner.go:130] > # 	"FOWNER",
	I1216 06:32:25.252415 1633651 command_runner.go:130] > # 	"SETGID",
	I1216 06:32:25.252431 1633651 command_runner.go:130] > # 	"SETUID",
	I1216 06:32:25.252493 1633651 command_runner.go:130] > # 	"SETPCAP",
	I1216 06:32:25.252505 1633651 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 06:32:25.252509 1633651 command_runner.go:130] > # 	"KILL",
	I1216 06:32:25.252512 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252520 1633651 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 06:32:25.252530 1633651 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 06:32:25.252534 1633651 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 06:32:25.252541 1633651 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 06:32:25.252547 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252564 1633651 command_runner.go:130] > default_sysctls = [
	I1216 06:32:25.252577 1633651 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 06:32:25.252581 1633651 command_runner.go:130] > ]
	I1216 06:32:25.252587 1633651 command_runner.go:130] > # List of devices on the host that a
	I1216 06:32:25.252597 1633651 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 06:32:25.252601 1633651 command_runner.go:130] > # allowed_devices = [
	I1216 06:32:25.252605 1633651 command_runner.go:130] > # 	"/dev/fuse",
	I1216 06:32:25.252610 1633651 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 06:32:25.252613 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252624 1633651 command_runner.go:130] > # List of additional devices, specified as
	I1216 06:32:25.252649 1633651 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 06:32:25.252661 1633651 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 06:32:25.252667 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252677 1633651 command_runner.go:130] > # additional_devices = [
	I1216 06:32:25.252685 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252691 1633651 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 06:32:25.252703 1633651 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 06:32:25.252716 1633651 command_runner.go:130] > # 	"/etc/cdi",
	I1216 06:32:25.252739 1633651 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 06:32:25.252743 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252750 1633651 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 06:32:25.252759 1633651 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 06:32:25.252769 1633651 command_runner.go:130] > # Defaults to false.
	I1216 06:32:25.252779 1633651 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 06:32:25.252786 1633651 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 06:32:25.252792 1633651 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 06:32:25.252807 1633651 command_runner.go:130] > # hooks_dir = [
	I1216 06:32:25.252819 1633651 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 06:32:25.252823 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252829 1633651 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 06:32:25.252851 1633651 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 06:32:25.252857 1633651 command_runner.go:130] > # its default mounts from the following two files:
	I1216 06:32:25.252863 1633651 command_runner.go:130] > #
	I1216 06:32:25.252870 1633651 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 06:32:25.252876 1633651 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 06:32:25.252882 1633651 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 06:32:25.252886 1633651 command_runner.go:130] > #
	I1216 06:32:25.252893 1633651 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 06:32:25.252917 1633651 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 06:32:25.252940 1633651 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 06:32:25.252947 1633651 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 06:32:25.252950 1633651 command_runner.go:130] > #
	I1216 06:32:25.252955 1633651 command_runner.go:130] > # default_mounts_file = ""
	I1216 06:32:25.252963 1633651 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 06:32:25.252970 1633651 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 06:32:25.252977 1633651 command_runner.go:130] > # pids_limit = -1
	I1216 06:32:25.252989 1633651 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1216 06:32:25.253005 1633651 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 06:32:25.253018 1633651 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 06:32:25.253043 1633651 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 06:32:25.253055 1633651 command_runner.go:130] > # log_size_max = -1
	I1216 06:32:25.253064 1633651 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 06:32:25.253068 1633651 command_runner.go:130] > # log_to_journald = false
	I1216 06:32:25.253080 1633651 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 06:32:25.253090 1633651 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 06:32:25.253096 1633651 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 06:32:25.253101 1633651 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 06:32:25.253123 1633651 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 06:32:25.253128 1633651 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 06:32:25.253151 1633651 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 06:32:25.253157 1633651 command_runner.go:130] > # read_only = false
	I1216 06:32:25.253169 1633651 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 06:32:25.253183 1633651 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 06:32:25.253188 1633651 command_runner.go:130] > # live configuration reload.
	I1216 06:32:25.253196 1633651 command_runner.go:130] > # log_level = "info"
	I1216 06:32:25.253219 1633651 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 06:32:25.253232 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.253236 1633651 command_runner.go:130] > # log_filter = ""
	I1216 06:32:25.253252 1633651 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253264 1633651 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 06:32:25.253273 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253281 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253287 1633651 command_runner.go:130] > # uid_mappings = ""
	I1216 06:32:25.253293 1633651 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253300 1633651 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 06:32:25.253311 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253328 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253340 1633651 command_runner.go:130] > # gid_mappings = ""
	I1216 06:32:25.253346 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 06:32:25.253362 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253369 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253377 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253385 1633651 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 06:32:25.253391 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 06:32:25.253408 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253421 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253438 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253448 1633651 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 06:32:25.253459 1633651 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 06:32:25.253468 1633651 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 06:32:25.253475 1633651 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 06:32:25.253481 1633651 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 06:32:25.253487 1633651 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 06:32:25.253493 1633651 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 06:32:25.253518 1633651 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 06:32:25.253530 1633651 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 06:32:25.253541 1633651 command_runner.go:130] > # drop_infra_ctr = true
	I1216 06:32:25.253557 1633651 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 06:32:25.253566 1633651 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 06:32:25.253573 1633651 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 06:32:25.253581 1633651 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 06:32:25.253607 1633651 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 06:32:25.253614 1633651 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 06:32:25.253630 1633651 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 06:32:25.253643 1633651 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 06:32:25.253647 1633651 command_runner.go:130] > # shared_cpuset = ""
	I1216 06:32:25.253653 1633651 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 06:32:25.253666 1633651 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 06:32:25.253670 1633651 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 06:32:25.253681 1633651 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 06:32:25.253688 1633651 command_runner.go:130] > # pinns_path = ""
	I1216 06:32:25.253694 1633651 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 06:32:25.253718 1633651 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 06:32:25.253731 1633651 command_runner.go:130] > # enable_criu_support = true
	I1216 06:32:25.253736 1633651 command_runner.go:130] > # Enable/disable the generation of the container,
	I1216 06:32:25.253754 1633651 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 06:32:25.253764 1633651 command_runner.go:130] > # enable_pod_events = false
	I1216 06:32:25.253771 1633651 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 06:32:25.253776 1633651 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 06:32:25.253786 1633651 command_runner.go:130] > # default_runtime = "crun"
	I1216 06:32:25.253795 1633651 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 06:32:25.253803 1633651 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1216 06:32:25.253814 1633651 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 06:32:25.253835 1633651 command_runner.go:130] > # creation as a file is not desired either.
	I1216 06:32:25.253853 1633651 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 06:32:25.253868 1633651 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 06:32:25.253876 1633651 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 06:32:25.253879 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.253885 1633651 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 06:32:25.253891 1633651 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 06:32:25.253923 1633651 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 06:32:25.253938 1633651 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 06:32:25.253941 1633651 command_runner.go:130] > #
	I1216 06:32:25.253946 1633651 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 06:32:25.253955 1633651 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 06:32:25.253959 1633651 command_runner.go:130] > # runtime_type = "oci"
	I1216 06:32:25.253977 1633651 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 06:32:25.253987 1633651 command_runner.go:130] > # inherit_default_runtime = false
	I1216 06:32:25.254007 1633651 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 06:32:25.254012 1633651 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 06:32:25.254016 1633651 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 06:32:25.254020 1633651 command_runner.go:130] > # monitor_env = []
	I1216 06:32:25.254034 1633651 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 06:32:25.254044 1633651 command_runner.go:130] > # allowed_annotations = []
	I1216 06:32:25.254060 1633651 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 06:32:25.254072 1633651 command_runner.go:130] > # no_sync_log = false
	I1216 06:32:25.254076 1633651 command_runner.go:130] > # default_annotations = {}
	I1216 06:32:25.254081 1633651 command_runner.go:130] > # stream_websockets = false
	I1216 06:32:25.254088 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.254142 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.254155 1633651 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 06:32:25.254162 1633651 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 06:32:25.254179 1633651 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 06:32:25.254193 1633651 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 06:32:25.254197 1633651 command_runner.go:130] > #   in $PATH.
	I1216 06:32:25.254203 1633651 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 06:32:25.254216 1633651 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 06:32:25.254223 1633651 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 06:32:25.254226 1633651 command_runner.go:130] > #   state.
	I1216 06:32:25.254232 1633651 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 06:32:25.254254 1633651 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1216 06:32:25.254272 1633651 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 06:32:25.254285 1633651 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 06:32:25.254290 1633651 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 06:32:25.254302 1633651 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 06:32:25.254311 1633651 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 06:32:25.254317 1633651 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 06:32:25.254340 1633651 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 06:32:25.254347 1633651 command_runner.go:130] > #   The currently recognized values are:
	I1216 06:32:25.254369 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 06:32:25.254378 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 06:32:25.254387 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 06:32:25.254393 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 06:32:25.254405 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 06:32:25.254419 1633651 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 06:32:25.254436 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 06:32:25.254450 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 06:32:25.254456 1633651 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 06:32:25.254476 1633651 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 06:32:25.254491 1633651 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 06:32:25.254498 1633651 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 06:32:25.254509 1633651 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 06:32:25.254520 1633651 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 06:32:25.254530 1633651 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 06:32:25.254561 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 06:32:25.254585 1633651 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 06:32:25.254596 1633651 command_runner.go:130] > #   deprecated option "conmon".
	I1216 06:32:25.254603 1633651 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 06:32:25.254613 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 06:32:25.254624 1633651 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 06:32:25.254629 1633651 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 06:32:25.254639 1633651 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 06:32:25.254660 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 06:32:25.254668 1633651 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 06:32:25.254672 1633651 command_runner.go:130] > #   conmon-rs by using:
	I1216 06:32:25.254689 1633651 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 06:32:25.254709 1633651 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 06:32:25.254724 1633651 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 06:32:25.254731 1633651 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 06:32:25.254739 1633651 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 06:32:25.254746 1633651 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 06:32:25.254767 1633651 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 06:32:25.254780 1633651 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 06:32:25.254799 1633651 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 06:32:25.254817 1633651 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 06:32:25.254822 1633651 command_runner.go:130] > #   when a machine crash happens.
	I1216 06:32:25.254829 1633651 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 06:32:25.254840 1633651 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 06:32:25.254848 1633651 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 06:32:25.254855 1633651 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 06:32:25.254861 1633651 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 06:32:25.254884 1633651 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 06:32:25.254894 1633651 command_runner.go:130] > #
	I1216 06:32:25.254899 1633651 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 06:32:25.254902 1633651 command_runner.go:130] > #
	I1216 06:32:25.254922 1633651 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 06:32:25.254936 1633651 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 06:32:25.254939 1633651 command_runner.go:130] > #
	I1216 06:32:25.254946 1633651 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 06:32:25.254954 1633651 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 06:32:25.254957 1633651 command_runner.go:130] > #
	I1216 06:32:25.254964 1633651 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 06:32:25.254970 1633651 command_runner.go:130] > # feature.
	I1216 06:32:25.254973 1633651 command_runner.go:130] > #
	I1216 06:32:25.254979 1633651 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 06:32:25.255001 1633651 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 06:32:25.255015 1633651 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 06:32:25.255021 1633651 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 06:32:25.255037 1633651 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 06:32:25.255046 1633651 command_runner.go:130] > #
	I1216 06:32:25.255053 1633651 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 06:32:25.255059 1633651 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 06:32:25.255065 1633651 command_runner.go:130] > #
	I1216 06:32:25.255071 1633651 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 06:32:25.255076 1633651 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 06:32:25.255079 1633651 command_runner.go:130] > #
	I1216 06:32:25.255089 1633651 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 06:32:25.255098 1633651 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 06:32:25.255116 1633651 command_runner.go:130] > # limitation.
	I1216 06:32:25.255127 1633651 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 06:32:25.255133 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 06:32:25.255143 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255151 1633651 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 06:32:25.255155 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255165 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255174 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255210 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255222 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255226 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255231 1633651 command_runner.go:130] > allowed_annotations = [
	I1216 06:32:25.255235 1633651 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 06:32:25.255238 1633651 command_runner.go:130] > ]
	I1216 06:32:25.255247 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255251 1633651 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 06:32:25.255267 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 06:32:25.255271 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255274 1633651 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 06:32:25.255290 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255300 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255305 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255324 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255354 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255360 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255364 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255371 1633651 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 06:32:25.255376 1633651 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 06:32:25.255383 1633651 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 06:32:25.255413 1633651 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 06:32:25.255438 1633651 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 06:32:25.255450 1633651 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 06:32:25.255462 1633651 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 06:32:25.255468 1633651 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 06:32:25.255478 1633651 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 06:32:25.255505 1633651 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 06:32:25.255522 1633651 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 06:32:25.255540 1633651 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 06:32:25.255551 1633651 command_runner.go:130] > # Example:
	I1216 06:32:25.255560 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 06:32:25.255569 1633651 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 06:32:25.255576 1633651 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 06:32:25.255584 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 06:32:25.255587 1633651 command_runner.go:130] > # cpuset = "0-1"
	I1216 06:32:25.255591 1633651 command_runner.go:130] > # cpushares = "5"
	I1216 06:32:25.255595 1633651 command_runner.go:130] > # cpuquota = "1000"
	I1216 06:32:25.255625 1633651 command_runner.go:130] > # cpuperiod = "100000"
	I1216 06:32:25.255636 1633651 command_runner.go:130] > # cpulimit = "35"
	I1216 06:32:25.255640 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.255645 1633651 command_runner.go:130] > # The workload name is workload-type.
	I1216 06:32:25.255652 1633651 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 06:32:25.255661 1633651 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 06:32:25.255667 1633651 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 06:32:25.255678 1633651 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 06:32:25.255686 1633651 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1216 06:32:25.255715 1633651 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 06:32:25.255733 1633651 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 06:32:25.255738 1633651 command_runner.go:130] > # Default value is set to true
	I1216 06:32:25.255749 1633651 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 06:32:25.255755 1633651 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 06:32:25.255760 1633651 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 06:32:25.255767 1633651 command_runner.go:130] > # Default value is set to 'false'
	I1216 06:32:25.255771 1633651 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 06:32:25.255776 1633651 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 06:32:25.255807 1633651 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 06:32:25.255817 1633651 command_runner.go:130] > # timezone = ""
	I1216 06:32:25.255824 1633651 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 06:32:25.255830 1633651 command_runner.go:130] > #
	I1216 06:32:25.255836 1633651 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 06:32:25.255846 1633651 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 06:32:25.255850 1633651 command_runner.go:130] > [crio.image]
	I1216 06:32:25.255856 1633651 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 06:32:25.255866 1633651 command_runner.go:130] > # default_transport = "docker://"
	I1216 06:32:25.255888 1633651 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 06:32:25.255905 1633651 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255915 1633651 command_runner.go:130] > # global_auth_file = ""
	I1216 06:32:25.255920 1633651 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 06:32:25.255925 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255931 1633651 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.255940 1633651 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 06:32:25.255955 1633651 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255961 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255968 1633651 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 06:32:25.255989 1633651 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 06:32:25.255997 1633651 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1216 06:32:25.256008 1633651 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1216 06:32:25.256014 1633651 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 06:32:25.256020 1633651 command_runner.go:130] > # pause_command = "/pause"
	I1216 06:32:25.256026 1633651 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 06:32:25.256032 1633651 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 06:32:25.256042 1633651 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 06:32:25.256057 1633651 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 06:32:25.256069 1633651 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 06:32:25.256085 1633651 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 06:32:25.256096 1633651 command_runner.go:130] > # pinned_images = [
	I1216 06:32:25.256100 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256106 1633651 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 06:32:25.256116 1633651 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 06:32:25.256122 1633651 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 06:32:25.256131 1633651 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 06:32:25.256139 1633651 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 06:32:25.256144 1633651 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 06:32:25.256150 1633651 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 06:32:25.256179 1633651 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 06:32:25.256192 1633651 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 06:32:25.256207 1633651 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1216 06:32:25.256217 1633651 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 06:32:25.256222 1633651 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 06:32:25.256229 1633651 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 06:32:25.256238 1633651 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 06:32:25.256242 1633651 command_runner.go:130] > # changing them here.
	I1216 06:32:25.256266 1633651 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 06:32:25.256283 1633651 command_runner.go:130] > # insecure_registries = [
	I1216 06:32:25.256293 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256303 1633651 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 06:32:25.256311 1633651 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1216 06:32:25.256321 1633651 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 06:32:25.256331 1633651 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 06:32:25.256347 1633651 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 06:32:25.256360 1633651 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 06:32:25.256372 1633651 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 06:32:25.256380 1633651 command_runner.go:130] > # auto_reload_registries = false
	I1216 06:32:25.256386 1633651 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 06:32:25.256395 1633651 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1216 06:32:25.256404 1633651 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 06:32:25.256408 1633651 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 06:32:25.256422 1633651 command_runner.go:130] > # The mode of short name resolution.
	I1216 06:32:25.256436 1633651 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 06:32:25.256452 1633651 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1216 06:32:25.256479 1633651 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 06:32:25.256484 1633651 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 06:32:25.256490 1633651 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1216 06:32:25.256497 1633651 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 06:32:25.256512 1633651 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 06:32:25.256532 1633651 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 06:32:25.256544 1633651 command_runner.go:130] > # CNI plugins.
	I1216 06:32:25.256548 1633651 command_runner.go:130] > [crio.network]
	I1216 06:32:25.256566 1633651 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 06:32:25.256583 1633651 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1216 06:32:25.256590 1633651 command_runner.go:130] > # cni_default_network = ""
	I1216 06:32:25.256596 1633651 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 06:32:25.256603 1633651 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 06:32:25.256610 1633651 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 06:32:25.256626 1633651 command_runner.go:130] > # plugin_dirs = [
	I1216 06:32:25.256650 1633651 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 06:32:25.256654 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256678 1633651 command_runner.go:130] > # List of included pod metrics.
	I1216 06:32:25.256691 1633651 command_runner.go:130] > # included_pod_metrics = [
	I1216 06:32:25.256695 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256701 1633651 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 06:32:25.256708 1633651 command_runner.go:130] > [crio.metrics]
	I1216 06:32:25.256712 1633651 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 06:32:25.256717 1633651 command_runner.go:130] > # enable_metrics = false
	I1216 06:32:25.256723 1633651 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 06:32:25.256728 1633651 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 06:32:25.256737 1633651 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1216 06:32:25.256762 1633651 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 06:32:25.256774 1633651 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 06:32:25.256778 1633651 command_runner.go:130] > # metrics_collectors = [
	I1216 06:32:25.256799 1633651 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 06:32:25.256808 1633651 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 06:32:25.256813 1633651 command_runner.go:130] > # 	"containers_oom_total",
	I1216 06:32:25.256818 1633651 command_runner.go:130] > # 	"processes_defunct",
	I1216 06:32:25.256829 1633651 command_runner.go:130] > # 	"operations_total",
	I1216 06:32:25.256834 1633651 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 06:32:25.256839 1633651 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 06:32:25.256842 1633651 command_runner.go:130] > # 	"operations_errors_total",
	I1216 06:32:25.256847 1633651 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 06:32:25.256851 1633651 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 06:32:25.256855 1633651 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 06:32:25.256869 1633651 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 06:32:25.256888 1633651 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 06:32:25.256897 1633651 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 06:32:25.256901 1633651 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 06:32:25.256906 1633651 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 06:32:25.256913 1633651 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 06:32:25.256916 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256923 1633651 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 06:32:25.256930 1633651 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 06:32:25.256944 1633651 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 06:32:25.256952 1633651 command_runner.go:130] > # metrics_port = 9090
	I1216 06:32:25.256958 1633651 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 06:32:25.256967 1633651 command_runner.go:130] > # metrics_socket = ""
	I1216 06:32:25.256972 1633651 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 06:32:25.256979 1633651 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 06:32:25.256987 1633651 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 06:32:25.257000 1633651 command_runner.go:130] > # certificate on any modification event.
	I1216 06:32:25.257004 1633651 command_runner.go:130] > # metrics_cert = ""
	I1216 06:32:25.257023 1633651 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 06:32:25.257034 1633651 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 06:32:25.257039 1633651 command_runner.go:130] > # metrics_key = ""
	I1216 06:32:25.257061 1633651 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 06:32:25.257070 1633651 command_runner.go:130] > [crio.tracing]
	I1216 06:32:25.257076 1633651 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 06:32:25.257080 1633651 command_runner.go:130] > # enable_tracing = false
	I1216 06:32:25.257088 1633651 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 06:32:25.257099 1633651 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 06:32:25.257111 1633651 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 06:32:25.257127 1633651 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 06:32:25.257138 1633651 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 06:32:25.257142 1633651 command_runner.go:130] > [crio.nri]
	I1216 06:32:25.257156 1633651 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 06:32:25.257167 1633651 command_runner.go:130] > # enable_nri = true
	I1216 06:32:25.257172 1633651 command_runner.go:130] > # NRI socket to listen on.
	I1216 06:32:25.257181 1633651 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 06:32:25.257193 1633651 command_runner.go:130] > # NRI plugin directory to use.
	I1216 06:32:25.257198 1633651 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 06:32:25.257205 1633651 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 06:32:25.257210 1633651 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 06:32:25.257218 1633651 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 06:32:25.257323 1633651 command_runner.go:130] > # nri_disable_connections = false
	I1216 06:32:25.257337 1633651 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 06:32:25.257342 1633651 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 06:32:25.257358 1633651 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 06:32:25.257370 1633651 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 06:32:25.257375 1633651 command_runner.go:130] > # NRI default validator configuration.
	I1216 06:32:25.257383 1633651 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 06:32:25.257393 1633651 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 06:32:25.257397 1633651 command_runner.go:130] > # can be restricted/rejected:
	I1216 06:32:25.257403 1633651 command_runner.go:130] > # - OCI hook injection
	I1216 06:32:25.257409 1633651 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 06:32:25.257417 1633651 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 06:32:25.257431 1633651 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 06:32:25.257443 1633651 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 06:32:25.257465 1633651 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 06:32:25.257479 1633651 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 06:32:25.257485 1633651 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 06:32:25.257493 1633651 command_runner.go:130] > #
	I1216 06:32:25.257498 1633651 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 06:32:25.257503 1633651 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 06:32:25.257510 1633651 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 06:32:25.257516 1633651 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 06:32:25.257522 1633651 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 06:32:25.257549 1633651 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 06:32:25.257562 1633651 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 06:32:25.257568 1633651 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 06:32:25.257574 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.257593 1633651 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1216 06:32:25.257604 1633651 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 06:32:25.257609 1633651 command_runner.go:130] > [crio.stats]
	I1216 06:32:25.257639 1633651 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 06:32:25.257651 1633651 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 06:32:25.257655 1633651 command_runner.go:130] > # stats_collection_period = 0
	I1216 06:32:25.257662 1633651 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 06:32:25.257671 1633651 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 06:32:25.257675 1633651 command_runner.go:130] > # collection_period = 0
	I1216 06:32:25.259482 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219727326Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 06:32:25.259512 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219767515Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 06:32:25.259524 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219798038Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 06:32:25.259536 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219823548Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 06:32:25.259545 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219901653Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:25.259556 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.220263616Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 06:32:25.259571 1633651 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1216 06:32:25.260036 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:25.260064 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:25.260092 1633651 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:32:25.260122 1633651 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:32:25.260297 1633651 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:32:25.260383 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:32:25.268343 1633651 command_runner.go:130] > kubeadm
	I1216 06:32:25.268362 1633651 command_runner.go:130] > kubectl
	I1216 06:32:25.268366 1633651 command_runner.go:130] > kubelet
	I1216 06:32:25.268406 1633651 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:32:25.268462 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:32:25.276071 1633651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:32:25.288575 1633651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:32:25.300994 1633651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 06:32:25.313670 1633651 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:32:25.317448 1633651 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 06:32:25.317550 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:25.453328 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:26.148228 1633651 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:32:26.148252 1633651 certs.go:195] generating shared ca certs ...
	I1216 06:32:26.148269 1633651 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.148410 1633651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:32:26.148482 1633651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:32:26.148493 1633651 certs.go:257] generating profile certs ...
	I1216 06:32:26.148601 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:32:26.148663 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:32:26.148727 1633651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:32:26.148740 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 06:32:26.148753 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 06:32:26.148765 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 06:32:26.148785 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 06:32:26.148802 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 06:32:26.148814 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 06:32:26.148830 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 06:32:26.148841 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 06:32:26.148892 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:32:26.148927 1633651 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:32:26.148935 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:32:26.148966 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:32:26.148996 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:32:26.149023 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:32:26.149078 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:26.149109 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.149127 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.149143 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.149727 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:32:26.167732 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:32:26.185872 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:32:26.203036 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:32:26.220347 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:32:26.238248 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:32:26.255572 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:32:26.272719 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:32:26.290975 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:32:26.308752 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:32:26.326261 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:32:26.344085 1633651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:32:26.357043 1633651 ssh_runner.go:195] Run: openssl version
	I1216 06:32:26.362895 1633651 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 06:32:26.363366 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.370980 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:32:26.378519 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382213 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382261 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382313 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.422786 1633651 command_runner.go:130] > 3ec20f2e
	I1216 06:32:26.423247 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:32:26.430703 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.437977 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:32:26.445376 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449306 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449352 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449400 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.489732 1633651 command_runner.go:130] > b5213941
	I1216 06:32:26.490221 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:32:26.498231 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.505778 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:32:26.513624 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517603 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517655 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517708 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.558501 1633651 command_runner.go:130] > 51391683
	I1216 06:32:26.558962 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
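The sequence above installs each CA into /usr/share/ca-certificates, computes its OpenSSL subject hash, and checks that /etc/ssl/certs/<hash>.0 points at it. A minimal Go sketch of that hash-symlink convention follows; the function name and example paths are illustrative and this is not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ensureHashLink mirrors the logged sequence: hash the certificate with
// "openssl x509 -hash -noout -in <cert>" and make sure <certsDir>/<hash>.0
// is a symlink to it, creating the link if it is missing.
func ensureHashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")

	// Equivalent of "sudo test -L /etc/ssl/certs/<hash>.0".
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already linked
	}
	// Equivalent of "ln -fs <cert> <link>" for the missing-link case.
	os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths only; the test run above uses the per-profile certs it logs.
	if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}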
	I1216 06:32:26.566709 1633651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570687 1633651 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570714 1633651 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 06:32:26.570721 1633651 command_runner.go:130] > Device: 259,1	Inode: 1064557     Links: 1
	I1216 06:32:26.570728 1633651 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:26.570734 1633651 command_runner.go:130] > Access: 2025-12-16 06:28:17.989070314 +0000
	I1216 06:32:26.570739 1633651 command_runner.go:130] > Modify: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570745 1633651 command_runner.go:130] > Change: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570750 1633651 command_runner.go:130] >  Birth: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570807 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:32:26.611178 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.611643 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:32:26.653044 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.653496 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:32:26.693948 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.694452 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:32:26.737177 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.737685 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:32:26.777863 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.778315 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 06:32:26.821770 1633651 command_runner.go:130] > Certificate will not expire
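Each "openssl x509 -noout -in <cert> -checkend 86400" run above only asks whether the certificate expires within the next 24 hours. A rough pure-Go equivalent of that check, with an illustrative path rather than minikube's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate in certFile expires
// within d, which is what "openssl x509 -checkend <seconds>" tests.
func expiresWithin(certFile string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certFile)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certFile)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}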
	I1216 06:32:26.822198 1633651 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:26.822282 1633651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:32:26.822342 1633651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:32:26.848560 1633651 cri.go:89] found id: ""
	I1216 06:32:26.848631 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:32:26.856311 1633651 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 06:32:26.856334 1633651 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 06:32:26.856341 1633651 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 06:32:26.856353 1633651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:32:26.856377 1633651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:32:26.856451 1633651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:32:26.863716 1633651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:32:26.864139 1633651 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-364120" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.864257 1633651 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "functional-364120" cluster setting kubeconfig missing "functional-364120" context setting]
	I1216 06:32:26.864570 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.865235 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.865467 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.866570 1633651 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 06:32:26.866631 1633651 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 06:32:26.866668 1633651 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 06:32:26.866693 1633651 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 06:32:26.866720 1633651 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 06:32:26.867179 1633651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:32:26.868151 1633651 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 06:32:26.877051 1633651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 06:32:26.877090 1633651 kubeadm.go:602] duration metric: took 20.700092ms to restartPrimaryControlPlane
	I1216 06:32:26.877101 1633651 kubeadm.go:403] duration metric: took 54.908954ms to StartCluster
	I1216 06:32:26.877118 1633651 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.877187 1633651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.877859 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.878064 1633651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:32:26.878625 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:26.878682 1633651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:32:26.878749 1633651 addons.go:70] Setting storage-provisioner=true in profile "functional-364120"
	I1216 06:32:26.878762 1633651 addons.go:239] Setting addon storage-provisioner=true in "functional-364120"
	I1216 06:32:26.878787 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.879288 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.879473 1633651 addons.go:70] Setting default-storageclass=true in profile "functional-364120"
	I1216 06:32:26.879497 1633651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-364120"
	I1216 06:32:26.879803 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.884633 1633651 out.go:179] * Verifying Kubernetes components...
	I1216 06:32:26.887314 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:26.918200 1633651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:32:26.919874 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.920155 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.920453 1633651 addons.go:239] Setting addon default-storageclass=true in "functional-364120"
	I1216 06:32:26.920538 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.920986 1633651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:26.921004 1633651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:32:26.921061 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.921340 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.964659 1633651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:26.964697 1633651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:32:26.964756 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.965286 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:26.998084 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:27.098293 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:27.125997 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:27.132422 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:27.897996 1633651 node_ready.go:35] waiting up to 6m0s for node "functional-364120" to be "Ready" ...
	I1216 06:32:27.898129 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:27.898194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:27.898417 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898455 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898484 1633651 retry.go:31] will retry after 293.203887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898523 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898548 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898555 1633651 retry.go:31] will retry after 361.667439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.192028 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.251245 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.251292 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.251318 1633651 retry.go:31] will retry after 421.770055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.261399 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.326104 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.326166 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.326190 1633651 retry.go:31] will retry after 230.03946ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
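The retry.go lines above show the kubectl apply being rerun with short, jittered delays while the API server on localhost:8441 still refuses connections. A minimal sketch of that retry-until-success pattern follows; it is not minikube's actual retry.go, and the delays and names are illustrative.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply runs "kubectl apply --force -f <manifest>" until it succeeds,
// sleeping a short jittered delay between attempts, roughly like the
// "will retry after ..." lines in the log above.
func retryApply(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("applying %s failed after %d attempts: %w", manifest, attempts, err)
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 10); err != nil {
		fmt.Println(err)
	}
}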
	I1216 06:32:28.398272 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.398664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.557150 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.610627 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.614370 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.614405 1633651 retry.go:31] will retry after 431.515922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.673577 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.751124 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.751167 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.751187 1633651 retry.go:31] will retry after 416.921651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.898406 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.046157 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:29.107254 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.107314 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.107371 1633651 retry.go:31] will retry after 899.303578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.168518 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:29.225793 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.229337 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.229371 1633651 retry.go:31] will retry after 758.152445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.398643 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.398767 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.399082 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.898862 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.898939 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.899317 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:29.899390 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:29.988648 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.011610 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.113177 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.113245 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.113269 1633651 retry.go:31] will retry after 739.984539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134431 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.134488 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134525 1633651 retry.go:31] will retry after 743.078754ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.398873 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.398944 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.399345 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.854128 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.878717 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.899202 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.899283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.899567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.948589 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.948629 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.948651 1633651 retry.go:31] will retry after 2.54132752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989038 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.989082 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989107 1633651 retry.go:31] will retry after 1.925489798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:31.398656 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.398729 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.399083 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:31.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.898714 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.899058 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.398954 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.399038 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:32.399469 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:32.898198 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.898298 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.914948 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:32.974729 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:32.974766 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:32.974784 1633651 retry.go:31] will retry after 2.13279976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.398213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.398308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:33.491042 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:33.546485 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:33.550699 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.550734 1633651 retry.go:31] will retry after 1.927615537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.899219 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.899329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.899638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:34.898732 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:35.108136 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:35.168080 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.168179 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.168237 1633651 retry.go:31] will retry after 2.609957821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.398216 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.398310 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.398589 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:35.478854 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:35.539410 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.539453 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.539472 1633651 retry.go:31] will retry after 2.66810674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.898940 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.899019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.899395 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.399231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.399312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.399638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.898542 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:36.898864 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:37.398807 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.399243 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:37.778747 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:37.833515 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:37.837237 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.837278 1633651 retry.go:31] will retry after 4.537651284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.898560 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.898639 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.898976 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.208455 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:38.268308 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:38.268354 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.268373 1633651 retry.go:31] will retry after 8.612374195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.398733 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.398807 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.399077 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.899000 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.899085 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.899556 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:38.899628 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:39.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.398769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:39.898353 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.898737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.898499 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.898578 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:41.398243 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:41.398654 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:41.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.898352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.375464 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:42.399185 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.399260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.399531 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.439480 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:42.439520 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.439538 1633651 retry.go:31] will retry after 13.723834965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.899110 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.899183 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.899457 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.398171 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.398246 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.398594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.898384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:43.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:44.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:44.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.398383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.398309 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.398384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:46.398787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:46.881536 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:46.898964 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.899056 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.899361 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.940375 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:46.943961 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:46.943995 1633651 retry.go:31] will retry after 5.072276608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:47.398701 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.398787 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.399064 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:47.898839 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.898914 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.899236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:48.398915 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.398993 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.399340 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:48.399397 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:48.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.899069 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.399214 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.399301 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.399707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.398466 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.398735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:50.898770 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:51.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:51.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.898592 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.017198 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:52.080330 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:52.080367 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.080387 1633651 retry.go:31] will retry after 19.488213597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.398170 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.398254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.398603 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.898357 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.898430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.898751 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:52.898809 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:53.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.398509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.398780 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:53.898306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.898746 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.398531 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.898536 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.898616 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.898937 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:54.899000 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:55.398275 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:55.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.164267 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:56.225232 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:56.225280 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.225300 1633651 retry.go:31] will retry after 14.108855756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.398529 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.398594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.898932 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.899282 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:56.899334 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:57.399213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.399288 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.399591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:57.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.898457 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.898545 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.898936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:59.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:59.398702 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:59.898313 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.398851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.898739 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:01.398863 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.398936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:01.399305 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:01.898923 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.899005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.899364 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.399175 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.399247 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.399610 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.898189 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.898266 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.398333 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.398410 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.398779 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.898460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.898527 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.898800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:03.898847 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:04.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.398745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:04.898458 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.898534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.898848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.398531 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.398614 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.898633 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.898709 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.899055 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:05.899137 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:06.398909 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.398987 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.399357 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:06.898176 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.898262 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.898675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.898344 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.898420 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.898721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:08.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:08.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:08.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.398398 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.398479 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.398785 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.898336 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.898666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:10.335122 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:10.396460 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:10.396519 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.396538 1633651 retry.go:31] will retry after 12.344116424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.398561 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.398627 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.398890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:10.398937 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:10.898605 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.898693 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.899053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.398885 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.569711 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:11.631078 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:11.634606 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.634637 1633651 retry.go:31] will retry after 14.712851021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.899031 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.899113 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.899432 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.898240 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.898566 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:12.898607 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:13.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:13.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.398404 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.398483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.898318 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:14.898742 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:15.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.398323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:15.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.398644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.898716 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.899100 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:16.899164 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:17.398918 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.399005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:17.899071 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.899230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.899613 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.398204 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.398291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.898350 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.898684 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:19.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:19.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:19.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.898318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.898648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.398306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.898284 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.898678 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.398294 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.898665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:21.898722 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:22.398602 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.398676 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.399053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:22.741700 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:22.805176 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:22.805212 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.805230 1633651 retry.go:31] will retry after 37.521073757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.898570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.398233 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.398648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:23.898753 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:24.398440 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:24.898547 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.898618 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.898926 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.898639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:26.348396 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:26.398844 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.398921 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.399279 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:26.399329 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:26.417393 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:26.417436 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.417455 1633651 retry.go:31] will retry after 31.35447413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.898149 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.898223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.898585 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.398341 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.398414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.898330 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.898845 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.898417 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.898496 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.898819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:28.898872 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:29.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.398632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:29.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.398830 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.898474 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:31.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.398330 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:31.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:31.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.898324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.898636 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.398372 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.398442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.898400 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.898485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.898850 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:33.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:33.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:33.898438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.398516 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.398867 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.898456 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.898537 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.898909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:35.398591 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.398658 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.398916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:35.398977 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:35.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.898728 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.898803 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:37.399202 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.399278 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.399639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:37.399694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:37.898374 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.398505 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.398571 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.898677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.398839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.898300 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:39.898667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:40.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:40.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.398462 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.398529 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.398809 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:41.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:42.398755 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.398839 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.399236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:42.898983 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.899053 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.899331 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.398183 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.398258 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.898308 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:44.398622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:44.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.398342 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.398448 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:46.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:46.398739 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:46.898513 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.898594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.898959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.398772 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.398859 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.399168 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.898938 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.899012 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.899377 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:48.399044 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.399126 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.399458 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:48.399514 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:48.898185 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.898520 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.398630 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.898360 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.898434 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.898761 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:50.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:51.398426 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.398503 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.398913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:51.898663 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.898743 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.899196 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.398565 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.398648 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.399111 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.898692 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.898773 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.899132 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:52.899190 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:53.398951 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.399065 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.399370 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:53.898173 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.898248 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.898623 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:55.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:55.398707 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:55.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.898628 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.898703 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.899073 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:57.398945 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.399019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.399371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:57.399427 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:57.772952 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:57.834039 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837641 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837741 1633651 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:33:57.899083 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.899158 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.899422 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.398161 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.898386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:59.898740 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:00.327789 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:34:00.398990 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.399071 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.399382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:00.427909 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.431971 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.432103 1633651 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:34:00.437092 1633651 out.go:179] * Enabled addons: 
	I1216 06:34:00.440884 1633651 addons.go:530] duration metric: took 1m33.562192947s for enable addons: enabled=[]
	I1216 06:34:00.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.898392 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.898244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.898577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:02.398409 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.398488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.398818 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:02.398876 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:02.898375 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.898792 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.398319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.398577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.398335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.398654 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.898325 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.898400 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:04.898801 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:05.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.398957 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:05.898686 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.899122 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.398925 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.399010 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.399354 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.899043 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:06.899475 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:07.399211 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.399289 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.399665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:07.898337 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.898748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:09.399015 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.399090 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.399360 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:09.399412 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:09.899197 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.899275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.899628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.398251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.898348 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:11.898743 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:12.398541 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.398609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:12.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.898723 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.899079 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.398865 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.398945 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.399273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.899072 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.899151 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.899501 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:13.899561 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:14.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:14.898363 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.398508 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.398583 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:16.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:16.398775 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:16.898203 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.898272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.898528 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.398515 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.398598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.398936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:18.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.398771 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:18.398820 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:18.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.898653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.398357 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.398446 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.398791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.898510 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.898589 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.898872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.398763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:20.898758 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:21.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.398590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:21.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.898851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.398811 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.398886 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.399204 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.899306 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:22.899351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:23.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.399181 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.399518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:23.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.898332 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.398714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:25.398435 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.398518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.398899 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:25.398964 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:25.898643 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.898718 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.898991 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.398331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.398659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.898526 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.899075 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:27.898798 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:28.398464 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.398539 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.398917 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:28.898624 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.898699 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.899014 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.398878 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.399221 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.898995 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.899075 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.899431 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:29.899497 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:30.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.398295 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.398549 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:30.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.898674 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.398835 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.898315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:32.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.398696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:32.398762 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:32.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.898844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.398245 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:34.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:34.398791 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:34.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.898671 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.898348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.898663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.398430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.898879 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.898962 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.899298 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:36.899363 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:37.398940 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.399018 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.399339 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:37.899128 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.899202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.899475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.398196 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.398276 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.398617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.898346 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.898424 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.898788 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:39.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.398304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.398637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:39.398705 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:39.898341 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.898419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.898791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.398499 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.398574 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.398963 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.898719 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.899009 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:41.398866 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.398958 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.399281 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:41.399336 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:41.899108 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.899190 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.899541 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.398226 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.398314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.398588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.898199 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.398433 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.398510 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.898570 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.898642 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.898913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:43.898966 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:44.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:44.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.898693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.398553 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.398755 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.399042 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.898964 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.899318 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:45.899373 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:46.399167 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.399253 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.399612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:46.898505 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.898584 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.898871 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.399034 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.399118 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.399524 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.898367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.898724 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:48.398399 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.398476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.398811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:48.398865 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:48.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.898763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.398479 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.398921 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.898632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.898398 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.898476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:50.898869 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:51.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:51.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:53.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.398437 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:53.398846 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:53.898285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.398640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.898640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:55.898694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:56.398347 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.398429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.398783 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:56.898669 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.898747 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.399054 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.399128 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.399397 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.898166 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.898252 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.898582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:58.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:58.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:58.898237 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.898734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:00.414820 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.414906 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.415201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:00.415247 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:00.899080 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.899160 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.899488 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.398203 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.398286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.898381 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.898741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.398760 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.398842 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.898874 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.898953 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.899310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:02.899364 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:03.399127 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.399199 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.399477 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:03.898183 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.898263 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.898574 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.898409 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.898488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.898770 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:05.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:05.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.398344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.398628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.898700 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.898789 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.899156 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:07.399150 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.399230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.399559 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:07.399618 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:07.898272 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.898270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.398741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.898260 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.898336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.898699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:09.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:10.398423 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.398892 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:10.898626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.898722 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.398911 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.399006 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.898151 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.898224 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:12.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:12.398695 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:12.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.398383 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.398463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.398838 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.898531 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.898894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:14.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:14.398765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:14.898300 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.398508 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.898664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.898532 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.898606 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:16.898924 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:17.398589 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.398661 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.398959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:17.898673 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.898753 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.899078 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.398855 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.398925 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.898973 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.899383 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:18.899438 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:19.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.399174 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.399532 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:19.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.898323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.898607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.898374 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:21.398418 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.398764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:21.398806 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:21.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.898294 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.898644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.398337 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.398411 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.898573 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.898929 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:23.898986 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:24.398626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.398696 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.398974 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:24.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.898396 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.898463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.898752 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:26.398331 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.398440 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:26.398836 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:26.898830 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.898904 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.899295 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.399188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.399497 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.898260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.898590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.398733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:28.898717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:29.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.398849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:29.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.898399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.898772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:31.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:31.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:31.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.898777 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.398810 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.398883 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.399201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.899041 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.899121 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.899453 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.398148 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.398223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.398492 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:33.898787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:34.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.398528 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.398873 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:34.898221 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.898605 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.898472 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.898882 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:35.898940 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:36.398373 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.398454 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.398749 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:36.898854 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.898926 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.899222 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.398175 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.398272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.398626 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.898642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:38.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:38.398766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:38.898476 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.898890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.398379 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.398485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.398800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:40.398507 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.398604 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.398907 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:40.398953 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:40.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.898635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.398325 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.398863 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.398319 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.398385 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.398670 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.898377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:42.898763 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:43.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:43.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.898461 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.898733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.398250 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.898255 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:45.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.398663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:45.398717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:45.898312 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.898398 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.398512 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.398593 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.398928 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.898755 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.898837 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:47.399074 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.399155 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.399470 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:47.399520 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:47.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.398547 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.398425 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.398876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.898573 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.898645 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.899024 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:49.899073 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:50.398808 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.399215 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:50.898894 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.898974 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.899314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.399073 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.399145 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.399405 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.899204 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.899286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.899637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:51.899692 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:52.398394 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.398470 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:52.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.898334 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.398736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.898467 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.898914 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:54.398587 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.398670 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.398930 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:54.398971 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:54.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.398429 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.398501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.398821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.898643 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.398726 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.898668 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.899021 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:56.899088 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:57.398828 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.398910 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.399188 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:57.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.899073 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.899382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.399133 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.399235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.399594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:59.398261 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:59.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.399357 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.399435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.399772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.898558 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.898912 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:01.398629 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.398704 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.399062 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:01.399123 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:01.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.898960 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.899233 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.399234 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.399313 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.399704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.398641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:03.898751 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:04.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.398413 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.398743 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:04.898440 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.898790 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.398493 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.398570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.898712 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.899049 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:05.899102 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:06.398845 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.398927 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.399275 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:06.899212 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.899287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.899619 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.398388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.398739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.898423 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.898501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:08.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:08.398759 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:08.898430 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.898507 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.398214 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.398290 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.398601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.898266 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.898705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.398295 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.398377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.898349 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.898702 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:10.898757 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:11.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:11.898435 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.898509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.898839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.398738 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.398804 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.399069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.898825 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.898900 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.899217 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:12.899278 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:13.399064 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.399138 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.399479 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:13.898174 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.898254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.898539 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.898437 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.898877 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:15.398539 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.398617 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.398894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:15.398947 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:15.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.898402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.898784 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.398330 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.398731 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.898535 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.898609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.898886 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:17.398882 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.398955 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.399291 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:17.399351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:17.899139 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.899551 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.398620 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.898649 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.898387 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.898473 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:19.898804 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:20.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:20.898334 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.898762 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.398456 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.898383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:22.398748 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.398819 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:22.399332 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:22.899045 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.899124 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.899438 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.398179 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.398688 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.398222 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.398629 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:24.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:25.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.398380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.398720 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:25.898403 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.898472 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.898649 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.898727 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.899069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:26.899125 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:27.398556 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.398654 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.398964 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:27.898756 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.898845 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.899194 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.398978 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.399057 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.399387 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.899171 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.899242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.899511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:28.899553 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:29.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:29.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.898467 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.898858 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:31.398431 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.398844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:31.398900 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:31.898545 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.898622 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.898916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.398834 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.398911 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.899021 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.899098 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.899424 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.398133 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.398202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.398473 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.898147 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.898235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:33.898642 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:34.398163 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:34.898191 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.898275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.898407 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:35.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:36.398446 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.398521 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:36.898729 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.898812 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.899129 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.399112 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.399185 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.399511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:38.398267 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.398710 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:38.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:38.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.398240 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.398351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:40.398360 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.398435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.398766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:40.398819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:40.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.898314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.898412 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.898487 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:42.898748 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:43.398416 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.398491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.398846 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:43.898235 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.398722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.898411 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.898483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.898775 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:44.898824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:45.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:45.898365 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.898459 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.898837 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.398716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.898502 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.898576 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.898840 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:46.898879 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:47.398781 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.398852 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:47.898950 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.899024 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.899371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.399121 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.399194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.399456 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.899245 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.899322 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.899641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:48.899693 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:49.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.398748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:49.898250 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.398347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.398703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.898421 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.898500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.898849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:51.398536 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.398624 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.398900 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:51.398944 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:51.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.398314 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.398399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.898717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:53.898780 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:54.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:54.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.898745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.398872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.898317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:56.398264 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.398341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:56.398721 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:56.898737 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.399000 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.399068 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.399335 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.899058 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.899134 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.899469 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:58.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.398317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:58.398749 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:58.898385 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:00.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.398481 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:00.398824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:00.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.898373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.398432 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.398511 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.898600 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:02.398363 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.398458 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.398848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:02.398903 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:02.898598 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.898677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.899033 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.398801 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.398882 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.399146 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.898939 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.899014 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.899351 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:04.399028 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.399109 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.399429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:04.399479 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:04.898171 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.898241 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.898523 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.398299 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.898372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.398612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.898577 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.898653 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.899006 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:06.899062 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:07.398886 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.398973 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.399304 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:07.899089 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.899159 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.899439 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.399244 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.399316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.399642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.898339 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:09.398430 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:09.398796 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:09.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.398393 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.398469 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.398815 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.898372 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:11.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:11.398848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:11.898377 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.398587 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.898339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.398375 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.398449 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.398799 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.898228 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.898581 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:13.898622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:14.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:14.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.398332 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:15.898699 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:16.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:16.898702 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.898784 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.899056 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.398983 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.399055 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.399412 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.899241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.899319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.899615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:17.899667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:18.398328 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.398395 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:18.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.898389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.898756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.898524 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.898598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.898881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:20.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:20.398727 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:20.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.898361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.398238 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.398309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:22.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.398717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:22.398773 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:22.898431 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.898499 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.898524 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.898868 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:24.398560 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.398637 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.398927 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:24.398969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:24.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.398721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.898307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.898627 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.898800 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.899142 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:26.899196 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:27.398976 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.399052 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.399314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:27.899092 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.899164 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.899471 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.398602 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.898655 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:29.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:29.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:29.898408 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.898906 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.398302 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.398631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.898286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.898730 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:31.398439 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:31.398911 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:31.898555 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.898623 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.898889 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.398937 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.399013 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.399352 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.899143 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.899571 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.398155 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.398227 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.398484 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.898595 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:33.898651 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:34.398324 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:34.898420 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.898491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.898356 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.898432 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.898728 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:35.898819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:36.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.398549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:36.898859 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.898933 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.899273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.399136 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.399213 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.399567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.898588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:38.398300 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.398379 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:38.398713 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:38.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.398283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:40.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.398419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:40.398761 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:40.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.898291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.898631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.398327 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.898757 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:42.398724 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.398796 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.399059 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:42.399111 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:42.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.898936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.899284 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.399100 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.399176 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.399519 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.898212 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.898287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.898548 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.398697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.898401 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.898475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:44.898860 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:45.398241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.398315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.398573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:45.898329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.898750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.398673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.898698 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.899039 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:46.899080 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:47.398977 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.399049 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.399400 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:47.899044 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.899122 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.899468 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.398202 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.398275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.398540 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.898650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:49.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:49.398711 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:49.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.898682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.398255 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.398634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.898338 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.898764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:51.398436 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.398820 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:51.398875 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:51.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.898647 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.898247 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.898414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:53.898813 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:54.398461 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.398534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.398794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:54.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.898766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.398305 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.898321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.898601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:56.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.398353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:56.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:56.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.898806 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.899207 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.398957 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.399027 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.399310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.899115 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.899188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.899518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.398611 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.898363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:58.898670 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:59.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:59.898427 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.898517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.898807 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:00.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.399475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 06:38:00.898197 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.898269 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:01.398343 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:01.398781 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:01.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.898354 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.898739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.398615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:03.898700 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:04.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.398687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:04.898364 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.898443 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.398679 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.898464 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.898794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:05.898848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:06.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.398546 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.398819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:06.898821 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.898898 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.899244 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.399177 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.399526 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.898233 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.898305 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.898583 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:08.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:08.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:08.898439 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.898512 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.398318 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.898371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.898351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:10.898697 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:11.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.398699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:11.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:12.898765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:13.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.398909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:13.898601 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.898682 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.899003 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.398694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.898453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.898911 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:14.898969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:15.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.398607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:15.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.898416 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.398312 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.898563 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.898635 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.898893 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:17.398825 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.398897 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.399203 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:17.399251 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:17.899015 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.899092 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.899429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.399192 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.399272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.399543 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.898701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.898230 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.898303 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:19.898691 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:20.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:20.898295 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.398453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.398559 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:21.898782 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:22.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.398740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:22.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.398750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.898299 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.898725 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:24.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.398635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:24.398676 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:24.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.898338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.398523 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.898660 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.898920 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:26.398605 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.398677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.399010 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:26.399063 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:26.898789 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.898863 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.899190 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.400218 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.400306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:27.400637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.898246 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.898312 1633651 node_ready.go:38] duration metric: took 6m0.000267561s for node "functional-364120" to be "Ready" ...
	I1216 06:38:27.901509 1633651 out.go:203] 
	W1216 06:38:27.904340 1633651 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:38:27.904359 1633651 out.go:285] * 
	W1216 06:38:27.906499 1633651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:38:27.909191 1633651 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932317848Z" level=info msg="Using the internal default seccomp profile"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932325314Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932333569Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932339411Z" level=info msg="RDT not available in the host system"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.932352063Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933182179Z" level=info msg="Conmon does support the --sync option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933208198Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933225937Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933900902Z" level=info msg="Conmon does support the --sync option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.933921595Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.934056086Z" level=info msg="Updated default CNI network name to "
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.934625401Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.934995232Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.935049066Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989476581Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989512372Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989552889Z" level=info msg="Create NRI interface"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989649866Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989658424Z" level=info msg="runtime interface created"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989668697Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989675409Z" level=info msg="runtime interface starting up..."
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.98968171Z" level=info msg="starting plugins..."
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.98969387Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:32:24 functional-364120 crio[5357]: time="2025-12-16T06:32:24.989753948Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:32:24 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:38:32.381507    8687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:32.382304    8687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:32.383880    8687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:32.384419    8687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:32.386126    8687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:38:32 up  9:21,  0 user,  load average: 0.34, 0.31, 0.78
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:38:29 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:30 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 16 06:38:30 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:30 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:30 functional-364120 kubelet[8563]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:30 functional-364120 kubelet[8563]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:30 functional-364120 kubelet[8563]: E1216 06:38:30.466515    8563 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:30 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:30 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:31 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 16 06:38:31 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:31 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:31 functional-364120 kubelet[8584]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:31 functional-364120 kubelet[8584]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:31 functional-364120 kubelet[8584]: E1216 06:38:31.212202    8584 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:31 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:31 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:31 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 16 06:38:31 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:31 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:31 functional-364120 kubelet[8605]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:31 functional-364120 kubelet[8605]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:31 functional-364120 kubelet[8605]: E1216 06:38:31.971450    8605 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:31 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:31 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (382.990623ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.39s)
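The kubelet log above points at the likely root cause of this failure and the others in this group: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), crash-loops under systemd, and the apiserver on 192.168.49.2:8441 never comes back, which matches the six-minute node_ready timeout. A minimal way to confirm the host cgroup mode while triaging (illustrative shell commands, not part of the recorded test run; the profile name functional-364120 is taken from the logs above):

	# "tmpfs" indicates cgroup v1, "cgroup2fs" indicates cgroup v2
	stat -fc %T /sys/fs/cgroup/
	# Docker can report the cgroup version of the host it runs on
	docker info --format '{{.CgroupVersion}}'
	# Run the same check inside the minikube node container
	out/minikube-linux-arm64 -p functional-364120 ssh -- stat -fc %T /sys/fs/cgroup/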

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 kubectl -- --context functional-364120 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 kubectl -- --context functional-364120 get pods: exit status 1 (124.24074ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-364120 kubectl -- --context functional-364120 get pods": exit status 1
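The refused connection to 192.168.49.2:8441 is consistent with the previous test's post-mortem, where the apiserver was reported as Stopped and the node's container list was empty. A short sketch of how the endpoint could be probed directly during triage (hypothetical diagnostic commands under the same assumptions, not part of the recorded run):

	# Overall profile status; APIServer shows as Stopped in the report above
	out/minikube-linux-arm64 -p functional-364120 status
	# Probe the apiserver port; connection refused is expected while kubelet keeps restarting
	curl -k https://192.168.49.2:8441/healthz
	# List all containers on the node; an empty list matches the "container status" section above
	out/minikube-linux-arm64 -p functional-364120 ssh -- sudo crictl ps -a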
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (307.906463ms)

-- stdout --
	Running

                                                
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 logs -n 25: (1.046290673s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr                                            │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format json --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format table --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete         │ -p functional-487532                                                                                                                              │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:24 UTC │
	│ start          │ -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:24 UTC │                     │
	│ start          │ -p functional-364120 --alsologtostderr -v=8                                                                                                       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:32 UTC │                     │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:latest                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add minikube-local-cache-test:functional-364120                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache delete minikube-local-cache-test:functional-364120                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl images                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ cache          │ functional-364120 cache reload                                                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ kubectl        │ functional-364120 kubectl -- --context functional-364120 get pods                                                                                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:32:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:32:21.945678 1633651 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:32:21.945884 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.945913 1633651 out.go:374] Setting ErrFile to fd 2...
	I1216 06:32:21.945938 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.946236 1633651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:32:21.946683 1633651 out.go:368] Setting JSON to false
	I1216 06:32:21.947701 1633651 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33293,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:32:21.947809 1633651 start.go:143] virtualization:  
	I1216 06:32:21.951426 1633651 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:32:21.955191 1633651 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:32:21.955256 1633651 notify.go:221] Checking for updates...
	I1216 06:32:21.958173 1633651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:32:21.961154 1633651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:21.964261 1633651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:32:21.967271 1633651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:32:21.970206 1633651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:32:21.973784 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:21.973958 1633651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:32:22.008677 1633651 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:32:22.008820 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.071471 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.061898568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.071599 1633651 docker.go:319] overlay module found
	I1216 06:32:22.074586 1633651 out.go:179] * Using the docker driver based on existing profile
	I1216 06:32:22.077482 1633651 start.go:309] selected driver: docker
	I1216 06:32:22.077504 1633651 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.077607 1633651 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:32:22.077718 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.133247 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.124039104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.133687 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:22.133753 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:22.133810 1633651 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.136881 1633651 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:32:22.139682 1633651 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:32:22.142506 1633651 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:32:22.145532 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:22.145589 1633651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:32:22.145600 1633651 cache.go:65] Caching tarball of preloaded images
	I1216 06:32:22.145641 1633651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:32:22.145690 1633651 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:32:22.145701 1633651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:32:22.145813 1633651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:32:22.165180 1633651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:32:22.165200 1633651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:32:22.165222 1633651 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:32:22.165256 1633651 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:32:22.165333 1633651 start.go:364] duration metric: took 48.796µs to acquireMachinesLock for "functional-364120"
	I1216 06:32:22.165354 1633651 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:32:22.165360 1633651 fix.go:54] fixHost starting: 
	I1216 06:32:22.165613 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:22.182587 1633651 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:32:22.182616 1633651 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:32:22.185776 1633651 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:32:22.185814 1633651 machine.go:94] provisionDockerMachine start ...
	I1216 06:32:22.185896 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.204643 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.205060 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.205076 1633651 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:32:22.340733 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.340761 1633651 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:32:22.340833 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.359374 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.359683 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.359701 1633651 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:32:22.513698 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.513777 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.532110 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.532428 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.532445 1633651 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:32:22.668828 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
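	The hostname-patching script above only rewrites /etc/hosts when the profile name is missing. If name resolution inside the node looks wrong after a restart, a quick manual check (illustrative only; the profile name functional-364120 is taken from this run and is not part of the test) is:

	    minikube -p functional-364120 ssh -- grep functional-364120 /etc/hosts   # expect a 127.0.1.1 entry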
	I1216 06:32:22.668856 1633651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:32:22.668881 1633651 ubuntu.go:190] setting up certificates
	I1216 06:32:22.668900 1633651 provision.go:84] configureAuth start
	I1216 06:32:22.668975 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:22.686750 1633651 provision.go:143] copyHostCerts
	I1216 06:32:22.686794 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686839 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:32:22.686850 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686924 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:32:22.687014 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687038 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:32:22.687049 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687078 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:32:22.687125 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687146 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:32:22.687154 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687181 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:32:22.687234 1633651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:32:22.948191 1633651 provision.go:177] copyRemoteCerts
	I1216 06:32:22.948261 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:32:22.948301 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.965164 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.060207 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 06:32:23.060306 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:32:23.077647 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 06:32:23.077712 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:32:23.095215 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 06:32:23.095292 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:32:23.112813 1633651 provision.go:87] duration metric: took 443.895655ms to configureAuth
	I1216 06:32:23.112841 1633651 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:32:23.113039 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:23.113160 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.130832 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:23.131171 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:23.131200 1633651 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:32:23.456336 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:32:23.456407 1633651 machine.go:97] duration metric: took 1.270583728s to provisionDockerMachine
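	The provisioning step above drops the service CIDR as an --insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O. A way to confirm the drop-in landed and the runtime came back up (illustrative commands, not part of the test run):

	    minikube -p functional-364120 ssh -- cat /etc/sysconfig/crio.minikube
	    minikube -p functional-364120 ssh -- sudo systemctl is-active crio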
	I1216 06:32:23.456430 1633651 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:32:23.456444 1633651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:32:23.456549 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:32:23.456623 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.474584 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.572573 1633651 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:32:23.576065 1633651 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 06:32:23.576089 1633651 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 06:32:23.576094 1633651 command_runner.go:130] > VERSION_ID="12"
	I1216 06:32:23.576099 1633651 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 06:32:23.576104 1633651 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 06:32:23.576107 1633651 command_runner.go:130] > ID=debian
	I1216 06:32:23.576111 1633651 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 06:32:23.576116 1633651 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 06:32:23.576121 1633651 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 06:32:23.576161 1633651 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:32:23.576184 1633651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:32:23.576195 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:32:23.576257 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:32:23.576334 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:32:23.576345 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 06:32:23.576419 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:32:23.576428 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> /etc/test/nested/copy/1599255/hosts
	I1216 06:32:23.576497 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:32:23.584272 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:23.602073 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:32:23.620211 1633651 start.go:296] duration metric: took 163.749097ms for postStartSetup
	I1216 06:32:23.620332 1633651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:32:23.620393 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.637607 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.729817 1633651 command_runner.go:130] > 11%
	I1216 06:32:23.729920 1633651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:32:23.734460 1633651 command_runner.go:130] > 173G
	I1216 06:32:23.734888 1633651 fix.go:56] duration metric: took 1.569523929s for fixHost
	I1216 06:32:23.734910 1633651 start.go:83] releasing machines lock for "functional-364120", held for 1.569567934s
	I1216 06:32:23.734992 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:23.753392 1633651 ssh_runner.go:195] Run: cat /version.json
	I1216 06:32:23.753419 1633651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:32:23.753445 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.753482 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.775365 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.776190 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.872489 1633651 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 06:32:23.964085 1633651 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 06:32:23.966949 1633651 ssh_runner.go:195] Run: systemctl --version
	I1216 06:32:23.972881 1633651 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 06:32:23.972927 1633651 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 06:32:23.973332 1633651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:32:24.017041 1633651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 06:32:24.021688 1633651 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 06:32:24.021875 1633651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:32:24.021943 1633651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:32:24.030849 1633651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:32:24.030874 1633651 start.go:496] detecting cgroup driver to use...
	I1216 06:32:24.030909 1633651 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:32:24.030973 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:32:24.046872 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:32:24.060299 1633651 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:32:24.060392 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:32:24.076826 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:32:24.090325 1633651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:32:24.210022 1633651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:32:24.329836 1633651 docker.go:234] disabling docker service ...
	I1216 06:32:24.329935 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:32:24.345813 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:32:24.359799 1633651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:32:24.482084 1633651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:32:24.592216 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:32:24.607323 1633651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:32:24.620059 1633651 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
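	With /etc/crictl.yaml pinned to the CRI-O socket as shown, the later crictl calls in this log resolve the runtime endpoint automatically. A manual sanity check on the node (illustrative) is:

	    sudo crictl version                                                     # picks up /etc/crictl.yaml
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version   # explicit equivalent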
	I1216 06:32:24.621570 1633651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:32:24.621685 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.630471 1633651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:32:24.630583 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.638917 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.647722 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.656274 1633651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:32:24.664335 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.674249 1633651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.682423 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.691805 1633651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:32:24.699096 1633651 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 06:32:24.700134 1633651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:32:24.707996 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:24.828004 1633651 ssh_runner.go:195] Run: sudo systemctl restart crio
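	The sed edits above amount to four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and the unprivileged-port sysctl. After the restart, the resulting values can be spot-checked with (illustrative):

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf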
	I1216 06:32:24.995020 1633651 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:32:24.995147 1633651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:32:24.998673 1633651 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 06:32:24.998710 1633651 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 06:32:24.998717 1633651 command_runner.go:130] > Device: 0,73	Inode: 1638        Links: 1
	I1216 06:32:24.998724 1633651 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:24.998732 1633651 command_runner.go:130] > Access: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998737 1633651 command_runner.go:130] > Modify: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998743 1633651 command_runner.go:130] > Change: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998747 1633651 command_runner.go:130] >  Birth: -
	I1216 06:32:24.999054 1633651 start.go:564] Will wait 60s for crictl version
	I1216 06:32:24.999171 1633651 ssh_runner.go:195] Run: which crictl
	I1216 06:32:25.003803 1633651 command_runner.go:130] > /usr/local/bin/crictl
	I1216 06:32:25.003920 1633651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:32:25.030365 1633651 command_runner.go:130] > Version:  0.1.0
	I1216 06:32:25.030401 1633651 command_runner.go:130] > RuntimeName:  cri-o
	I1216 06:32:25.030407 1633651 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 06:32:25.030415 1633651 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 06:32:25.032653 1633651 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:32:25.032766 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.062220 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.062244 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.062252 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.062258 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.062271 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.062277 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.062281 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.062287 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.062295 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.062298 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.062306 1633651 command_runner.go:130] >      static
	I1216 06:32:25.062310 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.062314 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.062318 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.062324 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.062328 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.062335 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.062355 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.062366 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.062371 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.062783 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.091083 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.091135 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.091142 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.091169 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.091182 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.091188 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.091193 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.091205 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.091210 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.091218 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.091223 1633651 command_runner.go:130] >      static
	I1216 06:32:25.091226 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.091230 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.091244 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.091254 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.091262 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.091274 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.091278 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.091282 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.091286 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.097058 1633651 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:32:25.100055 1633651 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:32:25.116990 1633651 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:32:25.121062 1633651 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 06:32:25.121217 1633651 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:32:25.121338 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:25.121400 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.161132 1633651 command_runner.go:130] > {
	I1216 06:32:25.161156 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.161162 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161171 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.161176 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161183 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.161197 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161202 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161212 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.161220 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.161224 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161229 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.161237 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161245 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161248 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161253 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161267 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.161272 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161278 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.161289 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161295 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161303 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.161313 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.161317 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161325 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.161333 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161342 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161350 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161353 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161360 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.161368 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161373 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.161376 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161380 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161388 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.161400 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.161403 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161408 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.161415 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.161424 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161431 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161435 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161442 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.161450 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161456 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.161459 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161469 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161477 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.161485 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.161489 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161493 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.161499 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161511 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161514 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161529 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161540 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161544 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161554 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161567 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.161571 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161578 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.161582 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161588 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161601 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.161614 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.161618 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161623 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.161631 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161636 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161639 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161643 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161647 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161667 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161675 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161682 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.161686 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161692 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.161701 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161705 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161714 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.161726 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.161730 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161734 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.161738 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161743 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161748 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161753 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161758 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161761 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161764 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161771 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.161779 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161785 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.161788 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161793 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161801 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.161814 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.161818 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161822 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.161826 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161830 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161836 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161839 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161846 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.161850 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161863 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.161870 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161874 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161882 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.161905 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.161913 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161918 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.161921 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161925 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161929 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161933 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161937 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161943 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161947 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161956 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.161960 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161965 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.161971 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161975 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161995 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.162003 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.162006 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.162010 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.162013 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.162017 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.162020 1633651 command_runner.go:130] >       },
	I1216 06:32:25.162029 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.162036 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.162040 1633651 command_runner.go:130] >     }
	I1216 06:32:25.162043 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.162046 1633651 command_runner.go:130] > }
	I1216 06:32:25.162230 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.162244 1633651 crio.go:433] Images already preloaded, skipping extraction
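	The preload is considered satisfied because the v1.35.0-beta.0 control-plane images listed above are already in CRI-O's image store; a quick way to verify that by hand on the node (illustrative) is:

	    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'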
	I1216 06:32:25.162311 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.189040 1633651 command_runner.go:130] > {
	I1216 06:32:25.189061 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.189066 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189085 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.189090 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189096 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.189100 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189103 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189112 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.189120 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.189125 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189133 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.189141 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189146 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189157 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189161 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189168 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.189171 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189177 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.189180 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189184 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189193 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.189201 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.189204 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189208 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.189212 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189217 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189220 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189223 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189230 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.189233 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189239 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.189242 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189246 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189255 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.189263 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.189266 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189270 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.189274 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.189278 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189281 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189284 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189291 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.189295 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189300 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.189309 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189313 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189322 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.189330 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.189333 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189337 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.189341 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189345 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189348 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189357 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189361 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189364 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189367 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189375 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.189378 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189384 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.189387 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189391 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189399 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.189407 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.189411 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189420 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.189423 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189427 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189431 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189435 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189439 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189444 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189453 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189460 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.189464 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189469 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.189473 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189486 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189495 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.189505 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.189508 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189513 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.189516 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189524 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189527 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189531 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189536 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189539 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189542 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189549 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.189553 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189558 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.189561 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189564 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189572 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.189580 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.189583 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189587 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.189591 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189595 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189597 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189600 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189607 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.189611 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189616 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.189620 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189623 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189631 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.189649 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.189653 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189660 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.189664 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189668 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189671 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189675 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189679 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189682 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189685 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189691 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.189695 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189700 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.189703 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189707 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189714 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.189722 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.189725 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189729 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.189732 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189736 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.189740 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189744 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189748 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.189751 1633651 command_runner.go:130] >     }
	I1216 06:32:25.189754 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.189758 1633651 command_runner.go:130] > }
	I1216 06:32:25.192082 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.192103 1633651 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:32:25.192110 1633651 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:32:25.192213 1633651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:32:25.192293 1633651 ssh_runner.go:195] Run: crio config
	I1216 06:32:25.241430 1633651 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 06:32:25.241454 1633651 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 06:32:25.241463 1633651 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 06:32:25.241467 1633651 command_runner.go:130] > #
	I1216 06:32:25.241474 1633651 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 06:32:25.241481 1633651 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 06:32:25.241487 1633651 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 06:32:25.241503 1633651 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 06:32:25.241507 1633651 command_runner.go:130] > # reload'.
	I1216 06:32:25.241513 1633651 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 06:32:25.241520 1633651 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 06:32:25.241526 1633651 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 06:32:25.241533 1633651 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 06:32:25.241546 1633651 command_runner.go:130] > [crio]
	I1216 06:32:25.241552 1633651 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 06:32:25.241558 1633651 command_runner.go:130] > # containers images, in this directory.
	I1216 06:32:25.242467 1633651 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 06:32:25.242525 1633651 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 06:32:25.243204 1633651 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 06:32:25.243220 1633651 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1216 06:32:25.243745 1633651 command_runner.go:130] > # imagestore = ""
	I1216 06:32:25.243759 1633651 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 06:32:25.243765 1633651 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 06:32:25.244384 1633651 command_runner.go:130] > # storage_driver = "overlay"
	I1216 06:32:25.244405 1633651 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 06:32:25.244412 1633651 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 06:32:25.244775 1633651 command_runner.go:130] > # storage_option = [
	I1216 06:32:25.245138 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.245151 1633651 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 06:32:25.245190 1633651 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 06:32:25.245804 1633651 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 06:32:25.245817 1633651 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 06:32:25.245829 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 06:32:25.245834 1633651 command_runner.go:130] > # always happen on a node reboot
	I1216 06:32:25.246485 1633651 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 06:32:25.246511 1633651 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 06:32:25.246534 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 06:32:25.246545 1633651 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 06:32:25.247059 1633651 command_runner.go:130] > # version_file_persist = ""
	I1216 06:32:25.247081 1633651 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 06:32:25.247091 1633651 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 06:32:25.247784 1633651 command_runner.go:130] > # internal_wipe = true
	I1216 06:32:25.247805 1633651 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 06:32:25.247812 1633651 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 06:32:25.248459 1633651 command_runner.go:130] > # internal_repair = true
	I1216 06:32:25.248493 1633651 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 06:32:25.248501 1633651 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 06:32:25.248507 1633651 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 06:32:25.249140 1633651 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 06:32:25.249157 1633651 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 06:32:25.249161 1633651 command_runner.go:130] > [crio.api]
	I1216 06:32:25.249167 1633651 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 06:32:25.251400 1633651 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 06:32:25.251419 1633651 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 06:32:25.251426 1633651 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 06:32:25.251453 1633651 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 06:32:25.251465 1633651 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 06:32:25.251470 1633651 command_runner.go:130] > # stream_port = "0"
	I1216 06:32:25.251476 1633651 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 06:32:25.251480 1633651 command_runner.go:130] > # stream_enable_tls = false
	I1216 06:32:25.251487 1633651 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 06:32:25.251494 1633651 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 06:32:25.251501 1633651 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 06:32:25.251510 1633651 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251527 1633651 command_runner.go:130] > # stream_tls_cert = ""
	I1216 06:32:25.251540 1633651 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 06:32:25.251546 1633651 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251563 1633651 command_runner.go:130] > # stream_tls_key = ""
	I1216 06:32:25.251575 1633651 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 06:32:25.251585 1633651 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 06:32:25.251591 1633651 command_runner.go:130] > # automatically pick up the changes.
	I1216 06:32:25.251603 1633651 command_runner.go:130] > # stream_tls_ca = ""
	I1216 06:32:25.251622 1633651 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251658 1633651 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 06:32:25.251672 1633651 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251677 1633651 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 06:32:25.251692 1633651 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 06:32:25.251703 1633651 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 06:32:25.251707 1633651 command_runner.go:130] > [crio.runtime]
	I1216 06:32:25.251713 1633651 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 06:32:25.251719 1633651 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 06:32:25.251735 1633651 command_runner.go:130] > # "nofile=1024:2048"
	I1216 06:32:25.251746 1633651 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 06:32:25.251751 1633651 command_runner.go:130] > # default_ulimits = [
	I1216 06:32:25.251754 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251760 1633651 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 06:32:25.251767 1633651 command_runner.go:130] > # no_pivot = false
	I1216 06:32:25.251773 1633651 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 06:32:25.251779 1633651 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 06:32:25.251788 1633651 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 06:32:25.251794 1633651 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 06:32:25.251799 1633651 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 06:32:25.251815 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251827 1633651 command_runner.go:130] > # conmon = ""
	I1216 06:32:25.251832 1633651 command_runner.go:130] > # Cgroup setting for conmon
	I1216 06:32:25.251838 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 06:32:25.251853 1633651 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 06:32:25.251866 1633651 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 06:32:25.251872 1633651 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 06:32:25.251879 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251884 1633651 command_runner.go:130] > # conmon_env = [
	I1216 06:32:25.251887 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251893 1633651 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 06:32:25.251898 1633651 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 06:32:25.251906 1633651 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 06:32:25.251910 1633651 command_runner.go:130] > # default_env = [
	I1216 06:32:25.251931 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251956 1633651 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 06:32:25.251970 1633651 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1216 06:32:25.251982 1633651 command_runner.go:130] > # selinux = false
	I1216 06:32:25.251995 1633651 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 06:32:25.252003 1633651 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 06:32:25.252037 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252047 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.252055 1633651 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 06:32:25.252060 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252066 1633651 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 06:32:25.252073 1633651 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 06:32:25.252082 1633651 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 06:32:25.252088 1633651 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 06:32:25.252097 1633651 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 06:32:25.252125 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252136 1633651 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 06:32:25.252147 1633651 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 06:32:25.252161 1633651 command_runner.go:130] > # the cgroup blockio controller.
	I1216 06:32:25.252165 1633651 command_runner.go:130] > # blockio_config_file = ""
	I1216 06:32:25.252172 1633651 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 06:32:25.252176 1633651 command_runner.go:130] > # blockio parameters.
	I1216 06:32:25.252182 1633651 command_runner.go:130] > # blockio_reload = false
	I1216 06:32:25.252207 1633651 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 06:32:25.252224 1633651 command_runner.go:130] > # irqbalance daemon.
	I1216 06:32:25.252230 1633651 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 06:32:25.252251 1633651 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 06:32:25.252260 1633651 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 06:32:25.252270 1633651 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 06:32:25.252276 1633651 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 06:32:25.252283 1633651 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 06:32:25.252291 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252295 1633651 command_runner.go:130] > # rdt_config_file = ""
	I1216 06:32:25.252300 1633651 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 06:32:25.252305 1633651 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 06:32:25.252321 1633651 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 06:32:25.252339 1633651 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 06:32:25.252356 1633651 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 06:32:25.252372 1633651 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 06:32:25.252380 1633651 command_runner.go:130] > # will be added.
	I1216 06:32:25.252385 1633651 command_runner.go:130] > # default_capabilities = [
	I1216 06:32:25.252388 1633651 command_runner.go:130] > # 	"CHOWN",
	I1216 06:32:25.252392 1633651 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 06:32:25.252405 1633651 command_runner.go:130] > # 	"FSETID",
	I1216 06:32:25.252411 1633651 command_runner.go:130] > # 	"FOWNER",
	I1216 06:32:25.252415 1633651 command_runner.go:130] > # 	"SETGID",
	I1216 06:32:25.252431 1633651 command_runner.go:130] > # 	"SETUID",
	I1216 06:32:25.252493 1633651 command_runner.go:130] > # 	"SETPCAP",
	I1216 06:32:25.252505 1633651 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 06:32:25.252509 1633651 command_runner.go:130] > # 	"KILL",
	I1216 06:32:25.252512 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252520 1633651 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 06:32:25.252530 1633651 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 06:32:25.252534 1633651 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 06:32:25.252541 1633651 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 06:32:25.252547 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252564 1633651 command_runner.go:130] > default_sysctls = [
	I1216 06:32:25.252577 1633651 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 06:32:25.252581 1633651 command_runner.go:130] > ]
	I1216 06:32:25.252587 1633651 command_runner.go:130] > # List of devices on the host that a
	I1216 06:32:25.252597 1633651 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 06:32:25.252601 1633651 command_runner.go:130] > # allowed_devices = [
	I1216 06:32:25.252605 1633651 command_runner.go:130] > # 	"/dev/fuse",
	I1216 06:32:25.252610 1633651 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 06:32:25.252613 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252624 1633651 command_runner.go:130] > # List of additional devices. specified as
	I1216 06:32:25.252649 1633651 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 06:32:25.252661 1633651 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 06:32:25.252667 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252677 1633651 command_runner.go:130] > # additional_devices = [
	I1216 06:32:25.252685 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252691 1633651 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 06:32:25.252703 1633651 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 06:32:25.252716 1633651 command_runner.go:130] > # 	"/etc/cdi",
	I1216 06:32:25.252739 1633651 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 06:32:25.252743 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252750 1633651 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 06:32:25.252759 1633651 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 06:32:25.252769 1633651 command_runner.go:130] > # Defaults to false.
	I1216 06:32:25.252779 1633651 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 06:32:25.252786 1633651 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 06:32:25.252792 1633651 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 06:32:25.252807 1633651 command_runner.go:130] > # hooks_dir = [
	I1216 06:32:25.252819 1633651 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 06:32:25.252823 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252829 1633651 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 06:32:25.252851 1633651 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 06:32:25.252857 1633651 command_runner.go:130] > # its default mounts from the following two files:
	I1216 06:32:25.252863 1633651 command_runner.go:130] > #
	I1216 06:32:25.252870 1633651 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 06:32:25.252876 1633651 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 06:32:25.252882 1633651 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 06:32:25.252886 1633651 command_runner.go:130] > #
	I1216 06:32:25.252893 1633651 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 06:32:25.252917 1633651 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 06:32:25.252940 1633651 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 06:32:25.252947 1633651 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 06:32:25.252950 1633651 command_runner.go:130] > #
	I1216 06:32:25.252955 1633651 command_runner.go:130] > # default_mounts_file = ""
	I1216 06:32:25.252963 1633651 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 06:32:25.252970 1633651 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 06:32:25.252977 1633651 command_runner.go:130] > # pids_limit = -1
	I1216 06:32:25.252989 1633651 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1216 06:32:25.253005 1633651 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 06:32:25.253018 1633651 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 06:32:25.253043 1633651 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 06:32:25.253055 1633651 command_runner.go:130] > # log_size_max = -1
	I1216 06:32:25.253064 1633651 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 06:32:25.253068 1633651 command_runner.go:130] > # log_to_journald = false
	I1216 06:32:25.253080 1633651 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 06:32:25.253090 1633651 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 06:32:25.253096 1633651 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 06:32:25.253101 1633651 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 06:32:25.253123 1633651 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 06:32:25.253128 1633651 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 06:32:25.253151 1633651 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 06:32:25.253157 1633651 command_runner.go:130] > # read_only = false
	I1216 06:32:25.253169 1633651 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 06:32:25.253183 1633651 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 06:32:25.253188 1633651 command_runner.go:130] > # live configuration reload.
	I1216 06:32:25.253196 1633651 command_runner.go:130] > # log_level = "info"
	I1216 06:32:25.253219 1633651 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 06:32:25.253232 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.253236 1633651 command_runner.go:130] > # log_filter = ""
	I1216 06:32:25.253252 1633651 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253264 1633651 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 06:32:25.253273 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253281 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253287 1633651 command_runner.go:130] > # uid_mappings = ""
	I1216 06:32:25.253293 1633651 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253300 1633651 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 06:32:25.253311 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253328 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253340 1633651 command_runner.go:130] > # gid_mappings = ""
	I1216 06:32:25.253346 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 06:32:25.253362 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253369 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253377 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253385 1633651 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 06:32:25.253391 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 06:32:25.253408 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253421 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253438 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253448 1633651 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 06:32:25.253459 1633651 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 06:32:25.253468 1633651 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 06:32:25.253475 1633651 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 06:32:25.253481 1633651 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 06:32:25.253487 1633651 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 06:32:25.253493 1633651 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 06:32:25.253518 1633651 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 06:32:25.253530 1633651 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 06:32:25.253541 1633651 command_runner.go:130] > # drop_infra_ctr = true
	I1216 06:32:25.253557 1633651 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 06:32:25.253566 1633651 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 06:32:25.253573 1633651 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 06:32:25.253581 1633651 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 06:32:25.253607 1633651 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 06:32:25.253614 1633651 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 06:32:25.253630 1633651 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 06:32:25.253643 1633651 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 06:32:25.253647 1633651 command_runner.go:130] > # shared_cpuset = ""
	I1216 06:32:25.253653 1633651 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 06:32:25.253666 1633651 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 06:32:25.253670 1633651 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 06:32:25.253681 1633651 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 06:32:25.253688 1633651 command_runner.go:130] > # pinns_path = ""
	I1216 06:32:25.253694 1633651 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 06:32:25.253718 1633651 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 06:32:25.253731 1633651 command_runner.go:130] > # enable_criu_support = true
	I1216 06:32:25.253736 1633651 command_runner.go:130] > # Enable/disable the generation of the container,
	I1216 06:32:25.253754 1633651 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 06:32:25.253764 1633651 command_runner.go:130] > # enable_pod_events = false
	I1216 06:32:25.253771 1633651 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 06:32:25.253776 1633651 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 06:32:25.253786 1633651 command_runner.go:130] > # default_runtime = "crun"
	I1216 06:32:25.253795 1633651 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 06:32:25.253803 1633651 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1216 06:32:25.253814 1633651 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 06:32:25.253835 1633651 command_runner.go:130] > # creation as a file is not desired either.
	I1216 06:32:25.253853 1633651 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 06:32:25.253868 1633651 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 06:32:25.253876 1633651 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 06:32:25.253879 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.253885 1633651 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 06:32:25.253891 1633651 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 06:32:25.253923 1633651 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 06:32:25.253938 1633651 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 06:32:25.253941 1633651 command_runner.go:130] > #
	I1216 06:32:25.253946 1633651 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 06:32:25.253955 1633651 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 06:32:25.253959 1633651 command_runner.go:130] > # runtime_type = "oci"
	I1216 06:32:25.253977 1633651 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 06:32:25.253987 1633651 command_runner.go:130] > # inherit_default_runtime = false
	I1216 06:32:25.254007 1633651 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 06:32:25.254012 1633651 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 06:32:25.254016 1633651 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 06:32:25.254020 1633651 command_runner.go:130] > # monitor_env = []
	I1216 06:32:25.254034 1633651 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 06:32:25.254044 1633651 command_runner.go:130] > # allowed_annotations = []
	I1216 06:32:25.254060 1633651 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 06:32:25.254072 1633651 command_runner.go:130] > # no_sync_log = false
	I1216 06:32:25.254076 1633651 command_runner.go:130] > # default_annotations = {}
	I1216 06:32:25.254081 1633651 command_runner.go:130] > # stream_websockets = false
	I1216 06:32:25.254088 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.254142 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.254155 1633651 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 06:32:25.254162 1633651 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 06:32:25.254179 1633651 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 06:32:25.254193 1633651 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 06:32:25.254197 1633651 command_runner.go:130] > #   in $PATH.
	I1216 06:32:25.254203 1633651 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 06:32:25.254216 1633651 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 06:32:25.254223 1633651 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 06:32:25.254226 1633651 command_runner.go:130] > #   state.
	I1216 06:32:25.254232 1633651 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 06:32:25.254254 1633651 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1216 06:32:25.254272 1633651 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 06:32:25.254285 1633651 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 06:32:25.254290 1633651 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 06:32:25.254302 1633651 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 06:32:25.254311 1633651 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 06:32:25.254317 1633651 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 06:32:25.254340 1633651 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 06:32:25.254347 1633651 command_runner.go:130] > #   The currently recognized values are:
	I1216 06:32:25.254369 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 06:32:25.254378 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 06:32:25.254387 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 06:32:25.254393 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 06:32:25.254405 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 06:32:25.254419 1633651 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 06:32:25.254436 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 06:32:25.254450 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 06:32:25.254456 1633651 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 06:32:25.254476 1633651 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 06:32:25.254491 1633651 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 06:32:25.254498 1633651 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 06:32:25.254509 1633651 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 06:32:25.254520 1633651 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 06:32:25.254530 1633651 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 06:32:25.254561 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 06:32:25.254585 1633651 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 06:32:25.254596 1633651 command_runner.go:130] > #   deprecated option "conmon".
	I1216 06:32:25.254603 1633651 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 06:32:25.254613 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 06:32:25.254624 1633651 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 06:32:25.254629 1633651 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 06:32:25.254639 1633651 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 06:32:25.254660 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 06:32:25.254668 1633651 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 06:32:25.254672 1633651 command_runner.go:130] > #   conmon-rs by using:
	I1216 06:32:25.254689 1633651 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 06:32:25.254709 1633651 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 06:32:25.254724 1633651 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 06:32:25.254731 1633651 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 06:32:25.254739 1633651 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 06:32:25.254746 1633651 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 06:32:25.254767 1633651 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 06:32:25.254780 1633651 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 06:32:25.254799 1633651 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 06:32:25.254817 1633651 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 06:32:25.254822 1633651 command_runner.go:130] > #   when a machine crash happens.
	I1216 06:32:25.254829 1633651 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 06:32:25.254840 1633651 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 06:32:25.254848 1633651 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 06:32:25.254855 1633651 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 06:32:25.254861 1633651 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 06:32:25.254884 1633651 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 06:32:25.254894 1633651 command_runner.go:130] > #
	I1216 06:32:25.254899 1633651 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 06:32:25.254902 1633651 command_runner.go:130] > #
	I1216 06:32:25.254922 1633651 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 06:32:25.254936 1633651 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 06:32:25.254939 1633651 command_runner.go:130] > #
	I1216 06:32:25.254946 1633651 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 06:32:25.254954 1633651 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 06:32:25.254957 1633651 command_runner.go:130] > #
	I1216 06:32:25.254964 1633651 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 06:32:25.254970 1633651 command_runner.go:130] > # feature.
	I1216 06:32:25.254973 1633651 command_runner.go:130] > #
	I1216 06:32:25.254979 1633651 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 06:32:25.255001 1633651 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 06:32:25.255015 1633651 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 06:32:25.255021 1633651 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 06:32:25.255037 1633651 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 06:32:25.255046 1633651 command_runner.go:130] > #
	I1216 06:32:25.255053 1633651 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 06:32:25.255059 1633651 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 06:32:25.255065 1633651 command_runner.go:130] > #
	I1216 06:32:25.255071 1633651 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 06:32:25.255076 1633651 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 06:32:25.255079 1633651 command_runner.go:130] > #
	I1216 06:32:25.255089 1633651 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 06:32:25.255098 1633651 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 06:32:25.255116 1633651 command_runner.go:130] > # limitation.
	I1216 06:32:25.255127 1633651 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 06:32:25.255133 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 06:32:25.255143 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255151 1633651 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 06:32:25.255155 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255165 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255174 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255210 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255222 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255226 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255231 1633651 command_runner.go:130] > allowed_annotations = [
	I1216 06:32:25.255235 1633651 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 06:32:25.255238 1633651 command_runner.go:130] > ]
	I1216 06:32:25.255247 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255251 1633651 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 06:32:25.255267 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 06:32:25.255271 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255274 1633651 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 06:32:25.255290 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255300 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255305 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255324 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255354 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255360 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255364 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255371 1633651 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 06:32:25.255376 1633651 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 06:32:25.255383 1633651 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 06:32:25.255413 1633651 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 06:32:25.255438 1633651 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 06:32:25.255450 1633651 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 06:32:25.255462 1633651 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 06:32:25.255468 1633651 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 06:32:25.255478 1633651 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 06:32:25.255505 1633651 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 06:32:25.255522 1633651 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 06:32:25.255540 1633651 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 06:32:25.255551 1633651 command_runner.go:130] > # Example:
	I1216 06:32:25.255560 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 06:32:25.255569 1633651 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 06:32:25.255576 1633651 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 06:32:25.255584 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 06:32:25.255587 1633651 command_runner.go:130] > # cpuset = "0-1"
	I1216 06:32:25.255591 1633651 command_runner.go:130] > # cpushares = "5"
	I1216 06:32:25.255595 1633651 command_runner.go:130] > # cpuquota = "1000"
	I1216 06:32:25.255625 1633651 command_runner.go:130] > # cpuperiod = "100000"
	I1216 06:32:25.255636 1633651 command_runner.go:130] > # cpulimit = "35"
	I1216 06:32:25.255640 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.255645 1633651 command_runner.go:130] > # The workload name is workload-type.
	I1216 06:32:25.255652 1633651 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 06:32:25.255661 1633651 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 06:32:25.255667 1633651 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 06:32:25.255678 1633651 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 06:32:25.255686 1633651 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1216 06:32:25.255715 1633651 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 06:32:25.255733 1633651 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 06:32:25.255738 1633651 command_runner.go:130] > # Default value is set to true
	I1216 06:32:25.255749 1633651 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 06:32:25.255755 1633651 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 06:32:25.255760 1633651 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 06:32:25.255767 1633651 command_runner.go:130] > # Default value is set to 'false'
	I1216 06:32:25.255771 1633651 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 06:32:25.255776 1633651 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 06:32:25.255807 1633651 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 06:32:25.255817 1633651 command_runner.go:130] > # timezone = ""
	I1216 06:32:25.255824 1633651 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 06:32:25.255830 1633651 command_runner.go:130] > #
	I1216 06:32:25.255836 1633651 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 06:32:25.255846 1633651 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 06:32:25.255850 1633651 command_runner.go:130] > [crio.image]
	I1216 06:32:25.255856 1633651 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 06:32:25.255866 1633651 command_runner.go:130] > # default_transport = "docker://"
	I1216 06:32:25.255888 1633651 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 06:32:25.255905 1633651 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255915 1633651 command_runner.go:130] > # global_auth_file = ""
	I1216 06:32:25.255920 1633651 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 06:32:25.255925 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255931 1633651 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.255940 1633651 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 06:32:25.255955 1633651 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255961 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255968 1633651 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 06:32:25.255989 1633651 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 06:32:25.255997 1633651 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1216 06:32:25.256008 1633651 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1216 06:32:25.256014 1633651 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 06:32:25.256020 1633651 command_runner.go:130] > # pause_command = "/pause"
	I1216 06:32:25.256026 1633651 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 06:32:25.256032 1633651 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 06:32:25.256042 1633651 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 06:32:25.256057 1633651 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 06:32:25.256069 1633651 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 06:32:25.256085 1633651 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 06:32:25.256096 1633651 command_runner.go:130] > # pinned_images = [
	I1216 06:32:25.256100 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256106 1633651 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 06:32:25.256116 1633651 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 06:32:25.256122 1633651 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 06:32:25.256131 1633651 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 06:32:25.256139 1633651 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 06:32:25.256144 1633651 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 06:32:25.256150 1633651 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 06:32:25.256179 1633651 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 06:32:25.256192 1633651 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 06:32:25.256207 1633651 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1216 06:32:25.256217 1633651 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 06:32:25.256222 1633651 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 06:32:25.256229 1633651 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 06:32:25.256238 1633651 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 06:32:25.256242 1633651 command_runner.go:130] > # changing them here.
	I1216 06:32:25.256266 1633651 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 06:32:25.256283 1633651 command_runner.go:130] > # insecure_registries = [
	I1216 06:32:25.256293 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256303 1633651 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 06:32:25.256311 1633651 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1216 06:32:25.256321 1633651 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 06:32:25.256331 1633651 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 06:32:25.256347 1633651 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 06:32:25.256360 1633651 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 06:32:25.256372 1633651 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 06:32:25.256380 1633651 command_runner.go:130] > # auto_reload_registries = false
	I1216 06:32:25.256386 1633651 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 06:32:25.256395 1633651 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1216 06:32:25.256404 1633651 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 06:32:25.256408 1633651 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 06:32:25.256422 1633651 command_runner.go:130] > # The mode of short name resolution.
	I1216 06:32:25.256436 1633651 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 06:32:25.256452 1633651 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1216 06:32:25.256479 1633651 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 06:32:25.256484 1633651 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 06:32:25.256490 1633651 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1216 06:32:25.256497 1633651 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 06:32:25.256512 1633651 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 06:32:25.256532 1633651 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 06:32:25.256544 1633651 command_runner.go:130] > # CNI plugins.
	I1216 06:32:25.256548 1633651 command_runner.go:130] > [crio.network]
	I1216 06:32:25.256566 1633651 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 06:32:25.256583 1633651 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1216 06:32:25.256590 1633651 command_runner.go:130] > # cni_default_network = ""
	I1216 06:32:25.256596 1633651 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 06:32:25.256603 1633651 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 06:32:25.256610 1633651 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 06:32:25.256626 1633651 command_runner.go:130] > # plugin_dirs = [
	I1216 06:32:25.256650 1633651 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 06:32:25.256654 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256678 1633651 command_runner.go:130] > # List of included pod metrics.
	I1216 06:32:25.256691 1633651 command_runner.go:130] > # included_pod_metrics = [
	I1216 06:32:25.256695 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256701 1633651 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 06:32:25.256708 1633651 command_runner.go:130] > [crio.metrics]
	I1216 06:32:25.256712 1633651 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 06:32:25.256717 1633651 command_runner.go:130] > # enable_metrics = false
	I1216 06:32:25.256723 1633651 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 06:32:25.256728 1633651 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 06:32:25.256737 1633651 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1216 06:32:25.256762 1633651 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 06:32:25.256774 1633651 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 06:32:25.256778 1633651 command_runner.go:130] > # metrics_collectors = [
	I1216 06:32:25.256799 1633651 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 06:32:25.256808 1633651 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 06:32:25.256813 1633651 command_runner.go:130] > # 	"containers_oom_total",
	I1216 06:32:25.256818 1633651 command_runner.go:130] > # 	"processes_defunct",
	I1216 06:32:25.256829 1633651 command_runner.go:130] > # 	"operations_total",
	I1216 06:32:25.256834 1633651 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 06:32:25.256839 1633651 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 06:32:25.256842 1633651 command_runner.go:130] > # 	"operations_errors_total",
	I1216 06:32:25.256847 1633651 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 06:32:25.256851 1633651 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 06:32:25.256855 1633651 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 06:32:25.256869 1633651 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 06:32:25.256888 1633651 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 06:32:25.256897 1633651 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 06:32:25.256901 1633651 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 06:32:25.256906 1633651 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 06:32:25.256913 1633651 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 06:32:25.256916 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256923 1633651 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 06:32:25.256930 1633651 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 06:32:25.256944 1633651 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 06:32:25.256952 1633651 command_runner.go:130] > # metrics_port = 9090
	I1216 06:32:25.256958 1633651 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 06:32:25.256967 1633651 command_runner.go:130] > # metrics_socket = ""
	I1216 06:32:25.256972 1633651 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 06:32:25.256979 1633651 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 06:32:25.256987 1633651 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 06:32:25.257000 1633651 command_runner.go:130] > # certificate on any modification event.
	I1216 06:32:25.257004 1633651 command_runner.go:130] > # metrics_cert = ""
	I1216 06:32:25.257023 1633651 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 06:32:25.257034 1633651 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 06:32:25.257039 1633651 command_runner.go:130] > # metrics_key = ""
	I1216 06:32:25.257061 1633651 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 06:32:25.257070 1633651 command_runner.go:130] > [crio.tracing]
	I1216 06:32:25.257076 1633651 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 06:32:25.257080 1633651 command_runner.go:130] > # enable_tracing = false
	I1216 06:32:25.257088 1633651 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 06:32:25.257099 1633651 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 06:32:25.257111 1633651 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 06:32:25.257127 1633651 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 06:32:25.257138 1633651 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 06:32:25.257142 1633651 command_runner.go:130] > [crio.nri]
	I1216 06:32:25.257156 1633651 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 06:32:25.257167 1633651 command_runner.go:130] > # enable_nri = true
	I1216 06:32:25.257172 1633651 command_runner.go:130] > # NRI socket to listen on.
	I1216 06:32:25.257181 1633651 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 06:32:25.257193 1633651 command_runner.go:130] > # NRI plugin directory to use.
	I1216 06:32:25.257198 1633651 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 06:32:25.257205 1633651 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 06:32:25.257210 1633651 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 06:32:25.257218 1633651 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 06:32:25.257323 1633651 command_runner.go:130] > # nri_disable_connections = false
	I1216 06:32:25.257337 1633651 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 06:32:25.257342 1633651 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 06:32:25.257358 1633651 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 06:32:25.257370 1633651 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 06:32:25.257375 1633651 command_runner.go:130] > # NRI default validator configuration.
	I1216 06:32:25.257383 1633651 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 06:32:25.257393 1633651 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 06:32:25.257397 1633651 command_runner.go:130] > # can be restricted/rejected:
	I1216 06:32:25.257403 1633651 command_runner.go:130] > # - OCI hook injection
	I1216 06:32:25.257409 1633651 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 06:32:25.257417 1633651 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 06:32:25.257431 1633651 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 06:32:25.257443 1633651 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 06:32:25.257465 1633651 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 06:32:25.257479 1633651 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 06:32:25.257485 1633651 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 06:32:25.257493 1633651 command_runner.go:130] > #
	I1216 06:32:25.257498 1633651 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 06:32:25.257503 1633651 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 06:32:25.257510 1633651 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 06:32:25.257516 1633651 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 06:32:25.257522 1633651 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 06:32:25.257549 1633651 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 06:32:25.257562 1633651 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 06:32:25.257568 1633651 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 06:32:25.257574 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.257593 1633651 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1216 06:32:25.257604 1633651 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 06:32:25.257609 1633651 command_runner.go:130] > [crio.stats]
	I1216 06:32:25.257639 1633651 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 06:32:25.257651 1633651 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 06:32:25.257655 1633651 command_runner.go:130] > # stats_collection_period = 0
	I1216 06:32:25.257662 1633651 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 06:32:25.257671 1633651 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 06:32:25.257675 1633651 command_runner.go:130] > # collection_period = 0
	I1216 06:32:25.259482 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219727326Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 06:32:25.259512 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219767515Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 06:32:25.259524 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219798038Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 06:32:25.259536 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219823548Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 06:32:25.259545 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219901653Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:25.259556 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.220263616Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 06:32:25.259571 1633651 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
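The dump above is CRI-O's effective configuration; minikube adjusts the defaults through drop-in files under /etc/crio/crio.conf.d/ (the 02-crio.conf and 10-crio.conf files named in the reload messages). A quick sketch of how to see what was actually overridden on the node, using only the paths from those messages:

  sudo ls /etc/crio/crio.conf.d/                                                # drop-ins merged into the config above
  sudo cat /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf.d/10-crio.conf
  sudo systemctl restart crio                                                   # pick up edits; several options above also support live reload
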
	I1216 06:32:25.260036 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:25.260064 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:25.260092 1633651 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:32:25.260122 1633651 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:32:25.260297 1633651 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:32:25.260383 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:32:25.268343 1633651 command_runner.go:130] > kubeadm
	I1216 06:32:25.268362 1633651 command_runner.go:130] > kubectl
	I1216 06:32:25.268366 1633651 command_runner.go:130] > kubelet
	I1216 06:32:25.268406 1633651 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:32:25.268462 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:32:25.276071 1633651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:32:25.288575 1633651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:32:25.300994 1633651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
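The 2221-byte kubeadm.yaml.new just copied is the config rendered above. Assuming the kubeadm build staged at this path supports the `config validate` subcommand, it could be sanity-checked in place before the restart path reuses it (a sketch, not something the test run does):

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new
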
	I1216 06:32:25.313670 1633651 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:32:25.317448 1633651 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 06:32:25.317550 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:25.453328 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:26.148228 1633651 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:32:26.148252 1633651 certs.go:195] generating shared ca certs ...
	I1216 06:32:26.148269 1633651 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.148410 1633651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:32:26.148482 1633651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:32:26.148493 1633651 certs.go:257] generating profile certs ...
	I1216 06:32:26.148601 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:32:26.148663 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:32:26.148727 1633651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:32:26.148740 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 06:32:26.148753 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 06:32:26.148765 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 06:32:26.148785 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 06:32:26.148802 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 06:32:26.148814 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 06:32:26.148830 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 06:32:26.148841 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 06:32:26.148892 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:32:26.148927 1633651 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:32:26.148935 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:32:26.148966 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:32:26.148996 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:32:26.149023 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:32:26.149078 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:26.149109 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.149127 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.149143 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.149727 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:32:26.167732 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:32:26.185872 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:32:26.203036 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:32:26.220347 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:32:26.238248 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:32:26.255572 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:32:26.272719 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:32:26.290975 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:32:26.308752 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:32:26.326261 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:32:26.344085 1633651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:32:26.357043 1633651 ssh_runner.go:195] Run: openssl version
	I1216 06:32:26.362895 1633651 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 06:32:26.363366 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.370980 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:32:26.378519 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382213 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382261 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382313 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.422786 1633651 command_runner.go:130] > 3ec20f2e
	I1216 06:32:26.423247 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:32:26.430703 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.437977 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:32:26.445376 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449306 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449352 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449400 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.489732 1633651 command_runner.go:130] > b5213941
	I1216 06:32:26.490221 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:32:26.498231 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.505778 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:32:26.513624 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517603 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517655 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517708 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.558501 1633651 command_runner.go:130] > 51391683
	I1216 06:32:26.558962 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
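All three certificates above are installed the same way: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as <hash>.0 into /etc/ssl/certs so that CApath-based verification can find it. Condensed for the minikubeCA cert (the leaf path in the last line is a hypothetical illustration):

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
  openssl verify -CApath /etc/ssl/certs /path/to/leaf-signed-by-minikubeCA.crt      # hypothetical leaf
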
	I1216 06:32:26.566709 1633651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570687 1633651 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570714 1633651 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 06:32:26.570721 1633651 command_runner.go:130] > Device: 259,1	Inode: 1064557     Links: 1
	I1216 06:32:26.570728 1633651 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:26.570734 1633651 command_runner.go:130] > Access: 2025-12-16 06:28:17.989070314 +0000
	I1216 06:32:26.570739 1633651 command_runner.go:130] > Modify: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570745 1633651 command_runner.go:130] > Change: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570750 1633651 command_runner.go:130] >  Birth: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570807 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:32:26.611178 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.611643 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:32:26.653044 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.653496 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:32:26.693948 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.694452 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:32:26.737177 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.737685 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:32:26.777863 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.778315 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 06:32:26.821770 1633651 command_runner.go:130] > Certificate will not expire
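Each of the checks above uses `openssl x509 -checkend`, which prints "Certificate will not expire" and exits 0 when the certificate remains valid for at least the given number of seconds (86400 = 24h). For example, against the apiserver cert copied earlier:

  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
    || echo "apiserver certificate expires within 24h"
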
	I1216 06:32:26.822198 1633651 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:26.822282 1633651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:32:26.822342 1633651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:32:26.848560 1633651 cri.go:89] found id: ""
	I1216 06:32:26.848631 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:32:26.856311 1633651 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 06:32:26.856334 1633651 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 06:32:26.856341 1633651 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 06:32:26.856353 1633651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:32:26.856377 1633651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:32:26.856451 1633651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:32:26.863716 1633651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:32:26.864139 1633651 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-364120" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.864257 1633651 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "functional-364120" cluster setting kubeconfig missing "functional-364120" context setting]
	I1216 06:32:26.864570 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.865235 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.865467 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.866570 1633651 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 06:32:26.866631 1633651 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 06:32:26.866668 1633651 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 06:32:26.866693 1633651 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 06:32:26.866720 1633651 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 06:32:26.867179 1633651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:32:26.868151 1633651 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 06:32:26.877051 1633651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 06:32:26.877090 1633651 kubeadm.go:602] duration metric: took 20.700092ms to restartPrimaryControlPlane
	I1216 06:32:26.877101 1633651 kubeadm.go:403] duration metric: took 54.908954ms to StartCluster
	I1216 06:32:26.877118 1633651 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.877187 1633651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.877859 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.878064 1633651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:32:26.878625 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:26.878682 1633651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:32:26.878749 1633651 addons.go:70] Setting storage-provisioner=true in profile "functional-364120"
	I1216 06:32:26.878762 1633651 addons.go:239] Setting addon storage-provisioner=true in "functional-364120"
	I1216 06:32:26.878787 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.879288 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.879473 1633651 addons.go:70] Setting default-storageclass=true in profile "functional-364120"
	I1216 06:32:26.879497 1633651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-364120"
	I1216 06:32:26.879803 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.884633 1633651 out.go:179] * Verifying Kubernetes components...
	I1216 06:32:26.887314 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:26.918200 1633651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:32:26.919874 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.920155 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.920453 1633651 addons.go:239] Setting addon default-storageclass=true in "functional-364120"
	I1216 06:32:26.920538 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.920986 1633651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:26.921004 1633651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:32:26.921061 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.921340 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.964659 1633651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:26.964697 1633651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:32:26.964756 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.965286 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:26.998084 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:27.098293 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:27.125997 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:27.132422 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:27.897996 1633651 node_ready.go:35] waiting up to 6m0s for node "functional-364120" to be "Ready" ...
	I1216 06:32:27.898129 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:27.898194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:27.898417 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898455 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898484 1633651 retry.go:31] will retry after 293.203887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898523 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898548 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898555 1633651 retry.go:31] will retry after 361.667439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.192028 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.251245 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.251292 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.251318 1633651 retry.go:31] will retry after 421.770055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.261399 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.326104 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.326166 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.326190 1633651 retry.go:31] will retry after 230.03946ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.398272 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.398664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
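The empty Response entries are the node-readiness poll (node_ready.go above) getting no usable answer while the apiserver is still coming up. The same probe can be reproduced from the host with the profile's client certificates, using the paths from the client config logged earlier (expect connection refused until the control plane is up):

  curl -s \
    --cacert /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt \
    --cert   /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt \
    --key    /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key \
    https://192.168.49.2:8441/api/v1/nodes/functional-364120
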
	I1216 06:32:28.557150 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.610627 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.614370 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.614405 1633651 retry.go:31] will retry after 431.515922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.673577 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.751124 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.751167 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.751187 1633651 retry.go:31] will retry after 416.921651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.898406 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.046157 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:29.107254 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.107314 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.107371 1633651 retry.go:31] will retry after 899.303578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.168518 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:29.225793 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.229337 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.229371 1633651 retry.go:31] will retry after 758.152445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.398643 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.398767 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.399082 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.898862 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.898939 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.899317 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:29.899390 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:29.988648 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.011610 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.113177 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.113245 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.113269 1633651 retry.go:31] will retry after 739.984539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134431 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.134488 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134525 1633651 retry.go:31] will retry after 743.078754ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.398873 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.398944 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.399345 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.854128 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.878717 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.899202 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.899283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.899567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.948589 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.948629 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.948651 1633651 retry.go:31] will retry after 2.54132752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989038 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.989082 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989107 1633651 retry.go:31] will retry after 1.925489798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:31.398656 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.398729 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.399083 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:31.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.898714 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.899058 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.398954 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.399038 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:32.399469 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:32.898198 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.898298 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.914948 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:32.974729 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:32.974766 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:32.974784 1633651 retry.go:31] will retry after 2.13279976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.398213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.398308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:33.491042 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:33.546485 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:33.550699 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.550734 1633651 retry.go:31] will retry after 1.927615537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.899219 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.899329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.899638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:34.898732 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:35.108136 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:35.168080 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.168179 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.168237 1633651 retry.go:31] will retry after 2.609957821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.398216 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.398310 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.398589 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:35.478854 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:35.539410 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.539453 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.539472 1633651 retry.go:31] will retry after 2.66810674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.898940 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.899019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.899395 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.399231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.399312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.399638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.898542 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:36.898864 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:37.398807 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.399243 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:37.778747 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:37.833515 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:37.837237 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.837278 1633651 retry.go:31] will retry after 4.537651284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.898560 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.898639 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.898976 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.208455 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:38.268308 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:38.268354 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.268373 1633651 retry.go:31] will retry after 8.612374195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.398733 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.398807 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.399077 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.899000 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.899085 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.899556 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:38.899628 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:39.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.398769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:39.898353 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.898737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.898499 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.898578 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:41.398243 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:41.398654 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:41.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.898352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.375464 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:42.399185 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.399260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.399531 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.439480 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:42.439520 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.439538 1633651 retry.go:31] will retry after 13.723834965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.899110 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.899183 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.899457 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.398171 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.398246 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.398594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.898384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:43.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:44.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:44.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.398383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.398309 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.398384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:46.398787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:46.881536 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:46.898964 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.899056 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.899361 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.940375 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:46.943961 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:46.943995 1633651 retry.go:31] will retry after 5.072276608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:47.398701 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.398787 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.399064 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:47.898839 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.898914 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.899236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:48.398915 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.398993 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.399340 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:48.399397 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:48.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.899069 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.399214 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.399301 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.399707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.398466 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.398735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:50.898770 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:51.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:51.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.898592 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.017198 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:52.080330 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:52.080367 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.080387 1633651 retry.go:31] will retry after 19.488213597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.398170 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.398254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.398603 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.898357 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.898430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.898751 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:52.898809 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:53.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.398509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.398780 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:53.898306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.898746 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.398531 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.898536 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.898616 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.898937 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:54.899000 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:55.398275 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:55.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.164267 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:56.225232 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:56.225280 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.225300 1633651 retry.go:31] will retry after 14.108855756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.398529 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.398594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.898932 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.899282 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:56.899334 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:57.399213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.399288 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.399591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:57.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.898457 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.898545 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.898936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:59.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:59.398702 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:59.898313 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.398851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.898739 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:01.398863 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.398936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:01.399305 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:01.898923 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.899005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.899364 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.399175 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.399247 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.399610 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.898189 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.898266 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.398333 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.398410 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.398779 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.898460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.898527 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.898800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:03.898847 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:04.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.398745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:04.898458 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.898534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.898848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.398531 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.398614 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.898633 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.898709 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.899055 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:05.899137 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:06.398909 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.398987 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.399357 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:06.898176 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.898262 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.898675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.898344 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.898420 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.898721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:08.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:08.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:08.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.398398 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.398479 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.398785 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.898336 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.898666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:10.335122 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:10.396460 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:10.396519 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.396538 1633651 retry.go:31] will retry after 12.344116424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.398561 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.398627 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.398890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:10.398937 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:10.898605 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.898693 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.899053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.398885 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.569711 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:11.631078 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:11.634606 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.634637 1633651 retry.go:31] will retry after 14.712851021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.899031 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.899113 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.899432 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.898240 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.898566 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:12.898607 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:13.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:13.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.398404 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.398483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.898318 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:14.898742 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
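	(The half-second GET loop against /api/v1/nodes/functional-364120 is the readiness wait: minikube keeps polling the node object until its Ready condition is observable again after the restart, and every few attempts it logs the warning above because the connection is still refused. A rough client-go equivalent of that wait loop follows; this is a sketch under assumed interval and timeout values, not the node_ready.go implementation.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node until its Ready condition is True, logging
	// and retrying on transient errors such as "connection refused".
	func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			} else {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(interval):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), client, "functional-364120", 500*time.Millisecond, 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}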
	I1216 06:33:15.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.398323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:15.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.398644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.898716 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.899100 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:16.899164 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:17.398918 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.399005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:17.899071 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.899230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.899613 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.398204 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.398291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.898350 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.898684 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:19.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:19.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:19.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.898318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.898648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.398306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.898284 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.898678 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.398294 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.898665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:21.898722 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:22.398602 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.398676 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.399053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:22.741700 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:22.805176 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:22.805212 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.805230 1633651 retry.go:31] will retry after 37.521073757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.898570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.398233 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.398648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:23.898753 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:24.398440 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:24.898547 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.898618 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.898926 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.898639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:26.348396 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:26.398844 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.398921 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.399279 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:26.399329 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:26.417393 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:26.417436 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.417455 1633651 retry.go:31] will retry after 31.35447413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.898149 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.898223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.898585 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.398341 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.398414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.898330 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.898845 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.898417 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.898496 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.898819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:28.898872 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:29.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.398632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:29.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.398830 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.898474 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:31.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.398330 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:31.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:31.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.898324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.898636 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.398372 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.398442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.898400 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.898485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.898850 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:33.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:33.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:33.898438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.398516 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.398867 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.898456 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.898537 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.898909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:35.398591 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.398658 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.398916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:35.398977 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:35.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.898728 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.898803 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:37.399202 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.399278 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.399639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:37.399694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:37.898374 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.398505 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.398571 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.898677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.398839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.898300 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:39.898667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:40.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:40.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.398462 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.398529 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.398809 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:41.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:42.398755 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.398839 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.399236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:42.898983 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.899053 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.899331 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.398183 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.398258 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.898308 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:44.398622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:44.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.398342 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.398448 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:46.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:46.398739 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:46.898513 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.898594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.898959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.398772 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.398859 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.399168 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.898938 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.899012 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.899377 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:48.399044 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.399126 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.399458 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:48.399514 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:48.898185 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.898520 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.398630 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.898360 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.898434 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.898761 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:50.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:51.398426 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.398503 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.398913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:51.898663 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.898743 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.899196 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.398565 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.398648 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.399111 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.898692 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.898773 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.899132 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:52.899190 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:53.398951 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.399065 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.399370 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:53.898173 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.898248 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.898623 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:55.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:55.398707 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:55.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.898628 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.898703 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.899073 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:57.398945 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.399019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.399371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:57.399427 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:57.772952 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:57.834039 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837641 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837741 1633651 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:33:57.899083 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.899158 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.899422 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.398161 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.898386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:59.898740 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:00.327789 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:34:00.398990 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.399071 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.399382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:00.427909 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.431971 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.432103 1633651 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:34:00.437092 1633651 out.go:179] * Enabled addons: 
	I1216 06:34:00.440884 1633651 addons.go:530] duration metric: took 1m33.562192947s for enable addons: enabled=[]
	I1216 06:34:00.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.898392 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.898244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.898577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:02.398409 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.398488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.398818 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:02.398876 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:02.898375 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.898792 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.398319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.398577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.398335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.398654 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.898325 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.898400 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:04.898801 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:05.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.398957 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:05.898686 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.899122 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.398925 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.399010 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.399354 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.899043 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:06.899475 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:07.399211 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.399289 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.399665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:07.898337 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.898748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:09.399015 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.399090 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.399360 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:09.399412 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:09.899197 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.899275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.899628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.398251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.898348 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:11.898743 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:12.398541 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.398609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:12.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.898723 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.899079 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.398865 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.398945 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.399273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.899072 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.899151 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.899501 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:13.899561 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:14.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:14.898363 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.398508 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.398583 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:16.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:16.398775 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:16.898203 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.898272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.898528 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.398515 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.398598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.398936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:18.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.398771 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:18.398820 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:18.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.898653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.398357 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.398446 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.398791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.898510 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.898589 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.898872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.398763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:20.898758 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:21.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.398590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:21.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.898851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.398811 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.398886 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.399204 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.899306 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:22.899351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:23.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.399181 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.399518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:23.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.898332 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.398714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:25.398435 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.398518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.398899 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:25.398964 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:25.898643 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.898718 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.898991 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.398331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.398659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.898526 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.899075 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:27.898798 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:28.398464 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.398539 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.398917 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:28.898624 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.898699 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.899014 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.398878 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.399221 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.898995 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.899075 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.899431 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:29.899497 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:30.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.398295 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.398549 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:30.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.898674 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.398835 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.898315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:32.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.398696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:32.398762 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:32.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.898844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.398245 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:34.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:34.398791 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:34.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.898671 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.898348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.898663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.398430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.898879 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.898962 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.899298 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:36.899363 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:37.398940 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.399018 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.399339 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:37.899128 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.899202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.899475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.398196 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.398276 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.398617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.898346 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.898424 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.898788 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:39.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.398304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.398637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:39.398705 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:39.898341 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.898419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.898791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.398499 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.398574 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.398963 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.898719 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.899009 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:41.398866 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.398958 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.399281 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:41.399336 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:41.899108 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.899190 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.899541 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.398226 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.398314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.398588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.898199 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.398433 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.398510 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.898570 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.898642 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.898913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:43.898966 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:44.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:44.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.898693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.398553 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.398755 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.399042 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.898964 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.899318 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:45.899373 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:46.399167 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.399253 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.399612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:46.898505 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.898584 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.898871 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.399034 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.399118 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.399524 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.898367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.898724 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:48.398399 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.398476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.398811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:48.398865 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:48.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.898763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.398479 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.398921 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.898632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.898398 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.898476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:50.898869 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:51.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:51.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:53.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.398437 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:53.398846 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:53.898285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.398640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.898640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:55.898694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:56.398347 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.398429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.398783 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:56.898669 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.898747 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.399054 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.399128 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.399397 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.898166 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.898252 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.898582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:58.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:58.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:58.898237 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.898734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:00.414820 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.414906 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.415201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:00.415247 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:00.899080 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.899160 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.899488 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.398203 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.398286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.898381 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.898741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.398760 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.398842 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.898874 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.898953 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.899310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:02.899364 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:03.399127 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.399199 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.399477 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:03.898183 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.898263 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.898574 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.898409 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.898488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.898770 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:05.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:05.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.398344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.398628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.898700 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.898789 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.899156 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:07.399150 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.399230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.399559 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:07.399618 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:07.898272 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.898270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.398741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.898260 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.898336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.898699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:09.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:10.398423 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.398892 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:10.898626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.898722 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.398911 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.399006 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.898151 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.898224 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:12.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:12.398695 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:12.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.398383 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.398463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.398838 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.898531 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.898894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:14.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:14.398765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:14.898300 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.398508 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.898664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.898532 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.898606 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:16.898924 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:17.398589 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.398661 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.398959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:17.898673 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.898753 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.899078 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.398855 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.398925 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.898973 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.899383 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:18.899438 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:19.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.399174 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.399532 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:19.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.898323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.898607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.898374 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:21.398418 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.398764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:21.398806 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:21.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.898294 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.898644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.398337 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.398411 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.898573 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.898929 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:23.898986 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:24.398626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.398696 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.398974 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:24.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.898396 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.898463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.898752 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:26.398331 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.398440 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:26.398836 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:26.898830 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.898904 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.899295 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.399188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.399497 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.898260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.898590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.398733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:28.898717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:29.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.398849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:29.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.898399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.898772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:31.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:31.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:31.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.898777 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.398810 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.398883 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.399201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.899041 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.899121 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.899453 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.398148 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.398223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.398492 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:33.898787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:34.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.398528 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.398873 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:34.898221 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.898605 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.898472 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.898882 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:35.898940 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:36.398373 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.398454 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.398749 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:36.898854 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.898926 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.899222 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.398175 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.398272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.398626 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.898642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:38.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:38.398766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:38.898476 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.898890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.398379 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.398485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.398800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:40.398507 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.398604 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.398907 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:40.398953 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:40.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.898635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.398325 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.398863 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.398319 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.398385 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.398670 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.898377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:42.898763 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:43.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:43.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.898461 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.898733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.398250 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.898255 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:45.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.398663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:45.398717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:45.898312 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.898398 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.398512 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.398593 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.398928 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.898755 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.898837 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:47.399074 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.399155 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.399470 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:47.399520 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:47.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.398547 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.398425 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.398876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.898573 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.898645 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.899024 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:49.899073 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:50.398808 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.399215 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:50.898894 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.898974 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.899314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.399073 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.399145 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.399405 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.899204 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.899286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.899637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:51.899692 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:52.398394 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.398470 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:52.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.898334 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.398736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.898467 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.898914 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:54.398587 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.398670 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.398930 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:54.398971 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:54.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.398429 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.398501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.398821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.898643 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.398726 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.898668 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.899021 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:56.899088 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:57.398828 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.398910 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.399188 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:57.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.899073 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.899382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.399133 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.399235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.399594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:59.398261 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:59.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.399357 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.399435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.399772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.898558 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.898912 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:01.398629 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.398704 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.399062 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:01.399123 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:01.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.898960 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.899233 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.399234 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.399313 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.399704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.398641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:03.898751 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:04.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.398413 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.398743 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:04.898440 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.898790 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.398493 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.398570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.898712 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.899049 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:05.899102 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:06.398845 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.398927 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.399275 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:06.899212 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.899287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.899619 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.398388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.398739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.898423 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.898501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:08.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:08.398759 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:08.898430 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.898507 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.398214 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.398290 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.398601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.898266 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.898705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.398295 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.398377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.898349 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.898702 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:10.898757 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:11.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:11.898435 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.898509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.898839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.398738 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.398804 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.399069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.898825 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.898900 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.899217 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:12.899278 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:13.399064 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.399138 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.399479 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:13.898174 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.898254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.898539 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.898437 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.898877 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:15.398539 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.398617 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.398894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:15.398947 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:15.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.898402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.898784 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.398330 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.398731 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.898535 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.898609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.898886 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:17.398882 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.398955 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.399291 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:17.399351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:17.899139 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.899551 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.398620 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.898649 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.898387 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.898473 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:19.898804 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:20.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:20.898334 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.898762 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.398456 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.898383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:22.398748 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.398819 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:22.399332 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:22.899045 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.899124 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.899438 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.398179 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.398688 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.398222 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.398629 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:24.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:25.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.398380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.398720 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:25.898403 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.898472 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.898649 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.898727 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.899069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:26.899125 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:27.398556 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.398654 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.398964 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:27.898756 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.898845 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.899194 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.398978 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.399057 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.399387 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.899171 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.899242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.899511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:28.899553 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:29.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:29.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.898467 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.898858 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:31.398431 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.398844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:31.398900 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:31.898545 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.898622 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.898916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.398834 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.398911 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.899021 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.899098 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.899424 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.398133 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.398202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.398473 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.898147 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.898235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:33.898642 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:34.398163 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:34.898191 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.898275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.898407 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:35.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:36.398446 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.398521 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:36.898729 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.898812 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.899129 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.399112 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.399185 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.399511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:38.398267 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.398710 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:38.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:38.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.398240 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.398351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:40.398360 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.398435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.398766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:40.398819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:40.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.898314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.898412 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.898487 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:42.898748 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:43.398416 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.398491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.398846 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:43.898235 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.398722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.898411 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.898483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.898775 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:44.898824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:45.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:45.898365 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.898459 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.898837 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.398716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.898502 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.898576 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.898840 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:46.898879 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:47.398781 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.398852 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:47.898950 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.899024 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.899371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.399121 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.399194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.399456 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.899245 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.899322 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.899641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:48.899693 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:49.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.398748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:49.898250 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.398347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.398703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.898421 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.898500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.898849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:51.398536 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.398624 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.398900 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:51.398944 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:51.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.398314 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.398399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.898717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:53.898780 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:54.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:54.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.898745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.398872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.898317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:56.398264 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.398341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:56.398721 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:56.898737 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.399000 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.399068 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.399335 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.899058 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.899134 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.899469 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:58.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.398317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:58.398749 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:58.898385 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:00.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.398481 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:00.398824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:00.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.898373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.398432 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.398511 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.898600 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:02.398363 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.398458 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.398848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:02.398903 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:02.898598 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.898677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.899033 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.398801 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.398882 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.399146 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.898939 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.899014 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.899351 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:04.399028 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.399109 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.399429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:04.399479 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:04.898171 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.898241 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.898523 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.398299 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.898372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.398612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.898577 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.898653 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.899006 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:06.899062 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:07.398886 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.398973 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.399304 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:07.899089 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.899159 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.899439 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.399244 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.399316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.399642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.898339 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:09.398430 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:09.398796 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:09.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.398393 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.398469 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.398815 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.898372 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:11.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:11.398848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:11.898377 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.398587 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.898339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.398375 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.398449 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.398799 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.898228 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.898581 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:13.898622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:14.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:14.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.398332 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:15.898699 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:16.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:16.898702 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.898784 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.899056 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.398983 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.399055 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.399412 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.899241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.899319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.899615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:17.899667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:18.398328 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.398395 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:18.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.898389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.898756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.898524 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.898598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.898881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:20.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:20.398727 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:20.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.898361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.398238 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.398309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:22.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.398717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:22.398773 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:22.898431 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.898499 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.898524 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.898868 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:24.398560 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.398637 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.398927 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:24.398969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:24.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.398721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.898307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.898627 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.898800 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.899142 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:26.899196 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:27.398976 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.399052 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.399314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:27.899092 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.899164 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.899471 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.398602 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.898655 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:29.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:29.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:29.898408 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.898906 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.398302 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.398631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.898286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.898730 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:31.398439 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:31.398911 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:31.898555 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.898623 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.898889 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.398937 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.399013 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.399352 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.899143 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.899571 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.398155 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.398227 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.398484 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.898595 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:33.898651 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:34.398324 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:34.898420 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.898491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.898356 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.898432 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.898728 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:35.898819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:36.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.398549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:36.898859 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.898933 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.899273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.399136 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.399213 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.399567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.898588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:38.398300 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.398379 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:38.398713 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:38.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.398283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:40.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.398419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:40.398761 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:40.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.898291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.898631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.398327 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.898757 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:42.398724 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.398796 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.399059 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:42.399111 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:42.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.898936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.899284 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.399100 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.399176 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.399519 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.898212 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.898287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.898548 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.398697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.898401 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.898475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:44.898860 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:45.398241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.398315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.398573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:45.898329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.898750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.398673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.898698 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.899039 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:46.899080 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:47.398977 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.399049 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.399400 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:47.899044 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.899122 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.899468 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.398202 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.398275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.398540 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.898650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:49.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:49.398711 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:49.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.898682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.398255 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.398634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.898338 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.898764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:51.398436 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.398820 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:51.398875 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:51.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.898647 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.898247 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.898414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:53.898813 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:54.398461 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.398534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.398794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:54.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.898766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.398305 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.898321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.898601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:56.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.398353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:56.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:56.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.898806 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.899207 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.398957 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.399027 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.399310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.899115 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.899188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.899518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.398611 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.898363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:58.898670 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:59.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:59.898427 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.898517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.898807 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:00.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.399475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 06:38:00.898197 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.898269 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:01.398343 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:01.398781 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:01.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.898354 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.898739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.398615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:03.898700 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:04.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.398687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:04.898364 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.898443 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.398679 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.898464 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.898794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:05.898848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:06.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.398546 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.398819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:06.898821 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.898898 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.899244 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.399177 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.399526 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.898233 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.898305 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.898583 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:08.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:08.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:08.898439 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.898512 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.398318 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.898371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.898351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:10.898697 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:11.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.398699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:11.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:12.898765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:13.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.398909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:13.898601 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.898682 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.899003 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.398694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.898453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.898911 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:14.898969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:15.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.398607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:15.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.898416 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.398312 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.898563 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.898635 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.898893 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:17.398825 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.398897 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.399203 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:17.399251 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:17.899015 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.899092 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.899429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.399192 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.399272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.399543 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.898701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.898230 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.898303 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:19.898691 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:20.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:20.898295 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.398453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.398559 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:21.898782 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:22.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.398740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:22.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.398750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.898299 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.898725 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:24.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.398635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:24.398676 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:24.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.898338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.398523 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.898660 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.898920 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:26.398605 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.398677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.399010 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:26.399063 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:26.898789 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.898863 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.899190 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.400218 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.400306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:27.400637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.898246 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.898312 1633651 node_ready.go:38] duration metric: took 6m0.000267561s for node "functional-364120" to be "Ready" ...
	I1216 06:38:27.901509 1633651 out.go:203] 
	W1216 06:38:27.904340 1633651 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:38:27.904359 1633651 out.go:285] * 
	W1216 06:38:27.906499 1633651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:38:27.909191 1633651 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:38:36 functional-364120 crio[5357]: time="2025-12-16T06:38:36.548206424Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=37ae7426-45c2-45bc-a7f9-b14a371314ac name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605426434Z" level=info msg="Checking image status: minikube-local-cache-test:functional-364120" id=13eb0b8e-5049-44e4-87c5-72abd7d1dca5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605630563Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605685439Z" level=info msg="Image minikube-local-cache-test:functional-364120 not found" id=13eb0b8e-5049-44e4-87c5-72abd7d1dca5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605776106Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-364120 found" id=13eb0b8e-5049-44e4-87c5-72abd7d1dca5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.629361141Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-364120" id=b996cd7c-bf1b-4d21-aa33-2c27e8f7fc09 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.629523825Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-364120 not found" id=b996cd7c-bf1b-4d21-aa33-2c27e8f7fc09 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.629576436Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-364120 found" id=b996cd7c-bf1b-4d21-aa33-2c27e8f7fc09 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.653340032Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-364120" id=767a7891-029a-4860-8349-88781764a026 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.653499417Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-364120 not found" id=767a7891-029a-4860-8349-88781764a026 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.653554843Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-364120 found" id=767a7891-029a-4860-8349-88781764a026 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.620908719Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1045cfc3-a374-4471-9bcc-7fb60eb5cce5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.970770396Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9246ae91-cfa3-4179-8016-7029975f27bd name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.97091891Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9246ae91-cfa3-4179-8016-7029975f27bd name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.970958188Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9246ae91-cfa3-4179-8016-7029975f27bd name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.542340366Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=15620261-c883-4179-8c3d-551c5846372d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.542645549Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=15620261-c883-4179-8c3d-551c5846372d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.542771909Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=15620261-c883-4179-8c3d-551c5846372d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.594827473Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=66d90bf1-74af-4ebc-8ecb-c345e0cabdf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.594957254Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=66d90bf1-74af-4ebc-8ecb-c345e0cabdf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.594997041Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=66d90bf1-74af-4ebc-8ecb-c345e0cabdf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.621225571Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8c8c2628-faae-47f6-82e4-f68829c2ead6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.621384145Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=8c8c2628-faae-47f6-82e4-f68829c2ead6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.621433507Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=8c8c2628-faae-47f6-82e4-f68829c2ead6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:40 functional-364120 crio[5357]: time="2025-12-16T06:38:40.217809837Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4602658b-593b-43b3-a28f-0dcd69a07939 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:38:41.758486    9338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:41.759259    9338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:41.760945    9338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:41.761231    9338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:41.762685    9338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:38:41 up  9:21,  0 user,  load average: 0.45, 0.33, 0.79
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:38:39 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:40 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 16 06:38:40 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:40 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:40 functional-364120 kubelet[9221]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:40 functional-364120 kubelet[9221]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:40 functional-364120 kubelet[9221]: E1216 06:38:40.206394    9221 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:40 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:40 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:40 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 16 06:38:40 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:40 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:40 functional-364120 kubelet[9250]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:40 functional-364120 kubelet[9250]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:40 functional-364120 kubelet[9250]: E1216 06:38:40.971257    9250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:40 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:40 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:41 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 16 06:38:41 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:41 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:41 functional-364120 kubelet[9321]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:41 functional-364120 kubelet[9321]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:41 functional-364120 kubelet[9321]: E1216 06:38:41.713400    9321 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:41 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:41 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
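The kubelet section above ends in a restart loop (counter at 1154) with the same validation error on every attempt: the v1.35.0-beta.0 kubelet refuses to start on a host that is still on cgroup v1. A minimal way to confirm which cgroup mode the node and the host Docker daemon report, using the profile name from this report (an illustrative check only, not something the test harness runs):
	# illustrative check, not executed by the test
	# inside the node: cgroup2fs => cgroup v2 (unified), tmpfs => cgroup v1
	out/minikube-linux-arm64 -p functional-364120 ssh -- stat -fc %T /sys/fs/cgroup/
	# the host Docker daemon's view
	docker info --format '{{.CgroupVersion}}'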
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (325.27606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-364120 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-364120 get pods: exit status 1 (110.966738ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-364120 get pods": exit status 1
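The connection refused on 192.168.49.2:8441 is consistent with the kubelet never coming up, so no kube-apiserver is running to serve that port. A hedged follow-up check, reusing the profile name and port from this report (not part of the test itself), is to look for a listener on 8441 inside the node:
	# illustrative check, not executed by the test
	out/minikube-linux-arm64 -p functional-364120 ssh -- sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"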
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
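The inspect output above shows 8441/tcp for this container published on 127.0.0.1:34263. The same Go template that appears in the "Last Start" log below for port 22 can read that mapping, and probing the forwarded port from the host is a quick sanity check (illustrative only; the port values are the ones shown above):
	# illustrative check, not executed by the test
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-364120
	curl -ksS https://127.0.0.1:34263/healthz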
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (313.281384ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 logs -n 25: (1.0181662s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr                                            │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format json --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format table --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete         │ -p functional-487532                                                                                                                              │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:24 UTC │
	│ start          │ -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:24 UTC │                     │
	│ start          │ -p functional-364120 --alsologtostderr -v=8                                                                                                       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:32 UTC │                     │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:latest                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add minikube-local-cache-test:functional-364120                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache delete minikube-local-cache-test:functional-364120                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl images                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ cache          │ functional-364120 cache reload                                                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ kubectl        │ functional-364120 kubectl -- --context functional-364120 get pods                                                                                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:32:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:32:21.945678 1633651 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:32:21.945884 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.945913 1633651 out.go:374] Setting ErrFile to fd 2...
	I1216 06:32:21.945938 1633651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:32:21.946236 1633651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:32:21.946683 1633651 out.go:368] Setting JSON to false
	I1216 06:32:21.947701 1633651 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33293,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:32:21.947809 1633651 start.go:143] virtualization:  
	I1216 06:32:21.951426 1633651 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:32:21.955191 1633651 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:32:21.955256 1633651 notify.go:221] Checking for updates...
	I1216 06:32:21.958173 1633651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:32:21.961154 1633651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:21.964261 1633651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:32:21.967271 1633651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:32:21.970206 1633651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:32:21.973784 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:21.973958 1633651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:32:22.008677 1633651 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:32:22.008820 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.071471 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.061898568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.071599 1633651 docker.go:319] overlay module found
	I1216 06:32:22.074586 1633651 out.go:179] * Using the docker driver based on existing profile
	I1216 06:32:22.077482 1633651 start.go:309] selected driver: docker
	I1216 06:32:22.077504 1633651 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.077607 1633651 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:32:22.077718 1633651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:32:22.133247 1633651 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:32:22.124039104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:32:22.133687 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:22.133753 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:22.133810 1633651 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:22.136881 1633651 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:32:22.139682 1633651 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:32:22.142506 1633651 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:32:22.145532 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:22.145589 1633651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:32:22.145600 1633651 cache.go:65] Caching tarball of preloaded images
	I1216 06:32:22.145641 1633651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:32:22.145690 1633651 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:32:22.145701 1633651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:32:22.145813 1633651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:32:22.165180 1633651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:32:22.165200 1633651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:32:22.165222 1633651 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:32:22.165256 1633651 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:32:22.165333 1633651 start.go:364] duration metric: took 48.796µs to acquireMachinesLock for "functional-364120"
	I1216 06:32:22.165354 1633651 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:32:22.165360 1633651 fix.go:54] fixHost starting: 
	I1216 06:32:22.165613 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:22.182587 1633651 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:32:22.182616 1633651 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:32:22.185776 1633651 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:32:22.185814 1633651 machine.go:94] provisionDockerMachine start ...
	I1216 06:32:22.185896 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.204643 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.205060 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.205076 1633651 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:32:22.340733 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.340761 1633651 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:32:22.340833 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.359374 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.359683 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.359701 1633651 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:32:22.513698 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:32:22.513777 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.532110 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:22.532428 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:22.532445 1633651 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:32:22.668828 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:32:22.668856 1633651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:32:22.668881 1633651 ubuntu.go:190] setting up certificates
	I1216 06:32:22.668900 1633651 provision.go:84] configureAuth start
	I1216 06:32:22.668975 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:22.686750 1633651 provision.go:143] copyHostCerts
	I1216 06:32:22.686794 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686839 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:32:22.686850 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:32:22.686924 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:32:22.687014 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687038 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:32:22.687049 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:32:22.687078 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:32:22.687125 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687146 1633651 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:32:22.687154 1633651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:32:22.687181 1633651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:32:22.687234 1633651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
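The line above shows the SAN list minikube bakes into the machine's server certificate (127.0.0.1, 192.168.49.2, functional-364120, localhost, minikube). A minimal cross-check of that list, assuming openssl is installed on the Jenkins host and using the output path named in the log:

	# Print the SANs in the freshly generated server cert (path taken from the log line above)
	openssl x509 -in /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'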
	I1216 06:32:22.948191 1633651 provision.go:177] copyRemoteCerts
	I1216 06:32:22.948261 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:32:22.948301 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:22.965164 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.060207 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 06:32:23.060306 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:32:23.077647 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 06:32:23.077712 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:32:23.095215 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 06:32:23.095292 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:32:23.112813 1633651 provision.go:87] duration metric: took 443.895655ms to configureAuth
	I1216 06:32:23.112841 1633651 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:32:23.113039 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:23.113160 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.130832 1633651 main.go:143] libmachine: Using SSH client type: native
	I1216 06:32:23.131171 1633651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:32:23.131200 1633651 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:32:23.456336 1633651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:32:23.456407 1633651 machine.go:97] duration metric: took 1.270583728s to provisionDockerMachine
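The provisioning step that just finished writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag covering the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarts cri-o. A quick sketch for confirming the drop-in exists and that the crio unit actually sources it, run inside the node and assuming the unit references the file through an EnvironmentFile= line:

	# Show the file minikube just wrote, then check the unit (plus drop-ins) for the EnvironmentFile reference
	cat /etc/sysconfig/crio.minikube
	sudo systemctl cat crio | grep -i environment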
	I1216 06:32:23.456430 1633651 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:32:23.456444 1633651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:32:23.456549 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:32:23.456623 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.474584 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.572573 1633651 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:32:23.576065 1633651 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 06:32:23.576089 1633651 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 06:32:23.576094 1633651 command_runner.go:130] > VERSION_ID="12"
	I1216 06:32:23.576099 1633651 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 06:32:23.576104 1633651 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 06:32:23.576107 1633651 command_runner.go:130] > ID=debian
	I1216 06:32:23.576111 1633651 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 06:32:23.576116 1633651 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 06:32:23.576121 1633651 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 06:32:23.576161 1633651 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:32:23.576184 1633651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:32:23.576195 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:32:23.576257 1633651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:32:23.576334 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:32:23.576345 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 06:32:23.576419 1633651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:32:23.576428 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> /etc/test/nested/copy/1599255/hosts
	I1216 06:32:23.576497 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:32:23.584272 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:23.602073 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:32:23.620211 1633651 start.go:296] duration metric: took 163.749097ms for postStartSetup
	I1216 06:32:23.620332 1633651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:32:23.620393 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.637607 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.729817 1633651 command_runner.go:130] > 11%
	I1216 06:32:23.729920 1633651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:32:23.734460 1633651 command_runner.go:130] > 173G
	I1216 06:32:23.734888 1633651 fix.go:56] duration metric: took 1.569523929s for fixHost
	I1216 06:32:23.734910 1633651 start.go:83] releasing machines lock for "functional-364120", held for 1.569567934s
	I1216 06:32:23.734992 1633651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:32:23.753392 1633651 ssh_runner.go:195] Run: cat /version.json
	I1216 06:32:23.753419 1633651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:32:23.753445 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.753482 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:23.775365 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.776190 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:23.872489 1633651 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 06:32:23.964085 1633651 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
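The two results above come from the parallel commands issued at 06:32:23.753: reading /version.json inside the node (kicbase/ISO build metadata) and probing https://registry.k8s.io/ for connectivity. Both can be reproduced by hand from the host; this sketch assumes a minikube binary on PATH and reuses the profile name from the log:

	# Re-read the base-image metadata and re-run the registry probe inside the functional-364120 node
	minikube -p functional-364120 ssh -- cat /version.json
	minikube -p functional-364120 ssh -- curl -sS -m 2 https://registry.k8s.io/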
	I1216 06:32:23.966949 1633651 ssh_runner.go:195] Run: systemctl --version
	I1216 06:32:23.972881 1633651 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 06:32:23.972927 1633651 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 06:32:23.973332 1633651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:32:24.017041 1633651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 06:32:24.021688 1633651 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 06:32:24.021875 1633651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:32:24.021943 1633651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:32:24.030849 1633651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:32:24.030874 1633651 start.go:496] detecting cgroup driver to use...
	I1216 06:32:24.030909 1633651 detect.go:187] detected "cgroupfs" cgroup driver on host os
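minikube has just detected "cgroupfs" as the host cgroup driver and will configure cri-o to match (see the cgroup_manager edit a few lines below). Two hedged ways to cross-check that detection from the host, assuming the docker CLI is present and /sys/fs/cgroup is mounted in the standard location:

	# Driver the host Docker daemon reports
	docker info --format '{{.CgroupDriver}}'
	# Filesystem type of the cgroup mount: "cgroup2fs" indicates the unified (v2) hierarchy, "tmpfs" usually indicates v1
	stat -fc %T /sys/fs/cgroup/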
	I1216 06:32:24.030973 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:32:24.046872 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:32:24.060299 1633651 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:32:24.060392 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:32:24.076826 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:32:24.090325 1633651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:32:24.210022 1633651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:32:24.329836 1633651 docker.go:234] disabling docker service ...
	I1216 06:32:24.329935 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:32:24.345813 1633651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:32:24.359799 1633651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:32:24.482084 1633651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:32:24.592216 1633651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:32:24.607323 1633651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:32:24.620059 1633651 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 06:32:24.621570 1633651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:32:24.621685 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.630471 1633651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:32:24.630583 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.638917 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.647722 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.656274 1633651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:32:24.664335 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.674249 1633651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.682423 1633651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:24.691805 1633651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:32:24.699096 1633651 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 06:32:24.700134 1633651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:32:24.707996 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:24.828004 1633651 ssh_runner.go:195] Run: sudo systemctl restart crio
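The sed edits above point cri-o at registry.k8s.io/pause:3.10.1, switch cgroup_manager to "cgroupfs", pin conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before the daemon is restarted. A hedged way to confirm the edits took effect, run inside the node (crio config needs root and logs progress to stderr, hence the redirect):

	# Values as written in the drop-in that was edited
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# Effective values as the running daemon reports them
	sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'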
	I1216 06:32:24.995020 1633651 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:32:24.995147 1633651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:32:24.998673 1633651 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 06:32:24.998710 1633651 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 06:32:24.998717 1633651 command_runner.go:130] > Device: 0,73	Inode: 1638        Links: 1
	I1216 06:32:24.998724 1633651 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:24.998732 1633651 command_runner.go:130] > Access: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998737 1633651 command_runner.go:130] > Modify: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998743 1633651 command_runner.go:130] > Change: 2025-12-16 06:32:24.929681899 +0000
	I1216 06:32:24.998747 1633651 command_runner.go:130] >  Birth: -
	I1216 06:32:24.999054 1633651 start.go:564] Will wait 60s for crictl version
	I1216 06:32:24.999171 1633651 ssh_runner.go:195] Run: which crictl
	I1216 06:32:25.003803 1633651 command_runner.go:130] > /usr/local/bin/crictl
	I1216 06:32:25.003920 1633651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:32:25.030365 1633651 command_runner.go:130] > Version:  0.1.0
	I1216 06:32:25.030401 1633651 command_runner.go:130] > RuntimeName:  cri-o
	I1216 06:32:25.030407 1633651 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 06:32:25.030415 1633651 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 06:32:25.032653 1633651 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:32:25.032766 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.062220 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.062244 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.062252 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.062258 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.062271 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.062277 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.062281 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.062287 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.062295 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.062298 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.062306 1633651 command_runner.go:130] >      static
	I1216 06:32:25.062310 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.062314 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.062318 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.062324 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.062328 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.062335 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.062355 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.062366 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.062371 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.062783 1633651 ssh_runner.go:195] Run: crio --version
	I1216 06:32:25.091083 1633651 command_runner.go:130] > crio version 1.34.3
	I1216 06:32:25.091135 1633651 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 06:32:25.091142 1633651 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 06:32:25.091169 1633651 command_runner.go:130] >    GitTreeState:   dirty
	I1216 06:32:25.091182 1633651 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 06:32:25.091188 1633651 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 06:32:25.091193 1633651 command_runner.go:130] >    Compiler:       gc
	I1216 06:32:25.091205 1633651 command_runner.go:130] >    Platform:       linux/arm64
	I1216 06:32:25.091210 1633651 command_runner.go:130] >    Linkmode:       static
	I1216 06:32:25.091218 1633651 command_runner.go:130] >    BuildTags:
	I1216 06:32:25.091223 1633651 command_runner.go:130] >      static
	I1216 06:32:25.091226 1633651 command_runner.go:130] >      netgo
	I1216 06:32:25.091230 1633651 command_runner.go:130] >      osusergo
	I1216 06:32:25.091244 1633651 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 06:32:25.091254 1633651 command_runner.go:130] >      seccomp
	I1216 06:32:25.091262 1633651 command_runner.go:130] >      apparmor
	I1216 06:32:25.091274 1633651 command_runner.go:130] >      selinux
	I1216 06:32:25.091278 1633651 command_runner.go:130] >    LDFlags:          unknown
	I1216 06:32:25.091282 1633651 command_runner.go:130] >    SeccompEnabled:   true
	I1216 06:32:25.091286 1633651 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 06:32:25.097058 1633651 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:32:25.100055 1633651 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:32:25.116990 1633651 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:32:25.121062 1633651 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 06:32:25.121217 1633651 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:32:25.121338 1633651 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:32:25.121400 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.161132 1633651 command_runner.go:130] > {
	I1216 06:32:25.161156 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.161162 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161171 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.161176 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161183 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.161197 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161202 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161212 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.161220 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.161224 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161229 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.161237 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161245 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161248 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161253 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161267 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.161272 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161278 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.161289 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161295 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161303 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.161313 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.161317 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161325 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.161333 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161342 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161350 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161353 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161360 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.161368 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161373 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.161376 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161380 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161388 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.161400 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.161403 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161408 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.161415 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.161424 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161431 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161435 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161442 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.161450 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161456 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.161459 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161469 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161477 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.161485 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.161489 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161493 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.161499 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161511 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161514 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161529 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161540 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161544 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161554 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161567 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.161571 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161578 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.161582 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161588 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161601 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.161614 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.161618 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161623 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.161631 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161636 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161639 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161643 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161647 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161667 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161675 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161682 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.161686 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161692 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.161701 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161705 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161714 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.161726 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.161730 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161734 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.161738 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161743 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161748 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161753 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161758 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161761 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161764 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161771 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.161779 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161785 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.161788 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161793 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161801 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.161814 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.161818 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161822 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.161826 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161830 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161836 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161839 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161846 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.161850 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161863 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.161870 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161874 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161882 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.161905 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.161913 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161918 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.161921 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.161925 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.161929 1633651 command_runner.go:130] >       },
	I1216 06:32:25.161933 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.161937 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.161943 1633651 command_runner.go:130] >     },
	I1216 06:32:25.161947 1633651 command_runner.go:130] >     {
	I1216 06:32:25.161956 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.161960 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.161965 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.161971 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.161975 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.161995 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.162003 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.162006 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.162010 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.162013 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.162017 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.162020 1633651 command_runner.go:130] >       },
	I1216 06:32:25.162029 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.162036 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.162040 1633651 command_runner.go:130] >     }
	I1216 06:32:25.162043 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.162046 1633651 command_runner.go:130] > }
	I1216 06:32:25.162230 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.162244 1633651 crio.go:433] Images already preloaded, skipping extraction
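The JSON dumped above is the raw output of "sudo crictl images --output json"; minikube compares it against the images expected for v1.35.0-beta.0 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, kindnet, storage-provisioner) and concludes they are all present. To read the same list without wading through raw JSON, assuming jq is available inside the node:

	# Flatten the image list to one repo:tag per line
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort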
	I1216 06:32:25.162311 1633651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:32:25.189040 1633651 command_runner.go:130] > {
	I1216 06:32:25.189061 1633651 command_runner.go:130] >   "images":  [
	I1216 06:32:25.189066 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189085 1633651 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 06:32:25.189090 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189096 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 06:32:25.189100 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189103 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189112 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 06:32:25.189120 1633651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 06:32:25.189125 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189133 1633651 command_runner.go:130] >       "size":  "111333938",
	I1216 06:32:25.189141 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189146 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189157 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189161 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189168 1633651 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 06:32:25.189171 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189177 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 06:32:25.189180 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189184 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189193 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 06:32:25.189201 1633651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 06:32:25.189204 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189208 1633651 command_runner.go:130] >       "size":  "29037500",
	I1216 06:32:25.189212 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189217 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189220 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189223 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189230 1633651 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 06:32:25.189233 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189239 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 06:32:25.189242 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189246 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189255 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 06:32:25.189263 1633651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 06:32:25.189266 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189270 1633651 command_runner.go:130] >       "size":  "74491780",
	I1216 06:32:25.189274 1633651 command_runner.go:130] >       "username":  "nonroot",
	I1216 06:32:25.189278 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189281 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189284 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189291 1633651 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 06:32:25.189295 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189300 1633651 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 06:32:25.189309 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189313 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189322 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 06:32:25.189330 1633651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 06:32:25.189333 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189337 1633651 command_runner.go:130] >       "size":  "60857170",
	I1216 06:32:25.189341 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189345 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189348 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189357 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189361 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189364 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189367 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189375 1633651 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 06:32:25.189378 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189384 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 06:32:25.189387 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189391 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189399 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 06:32:25.189407 1633651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 06:32:25.189411 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189420 1633651 command_runner.go:130] >       "size":  "84949999",
	I1216 06:32:25.189423 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189427 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189431 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189435 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189439 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189444 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189453 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189460 1633651 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 06:32:25.189464 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189469 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 06:32:25.189473 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189486 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189495 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 06:32:25.189505 1633651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 06:32:25.189508 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189513 1633651 command_runner.go:130] >       "size":  "72170325",
	I1216 06:32:25.189516 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189524 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189527 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189531 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189536 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189539 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189542 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189549 1633651 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 06:32:25.189553 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189558 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 06:32:25.189561 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189564 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189572 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 06:32:25.189580 1633651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 06:32:25.189583 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189587 1633651 command_runner.go:130] >       "size":  "74106775",
	I1216 06:32:25.189591 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189595 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189597 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189600 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189607 1633651 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 06:32:25.189611 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189616 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 06:32:25.189620 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189623 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189631 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 06:32:25.189649 1633651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 06:32:25.189653 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189660 1633651 command_runner.go:130] >       "size":  "49822549",
	I1216 06:32:25.189664 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189668 1633651 command_runner.go:130] >         "value":  "0"
	I1216 06:32:25.189671 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189675 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189679 1633651 command_runner.go:130] >       "pinned":  false
	I1216 06:32:25.189682 1633651 command_runner.go:130] >     },
	I1216 06:32:25.189685 1633651 command_runner.go:130] >     {
	I1216 06:32:25.189691 1633651 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 06:32:25.189695 1633651 command_runner.go:130] >       "repoTags":  [
	I1216 06:32:25.189700 1633651 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.189703 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189707 1633651 command_runner.go:130] >       "repoDigests":  [
	I1216 06:32:25.189714 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 06:32:25.189722 1633651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 06:32:25.189725 1633651 command_runner.go:130] >       ],
	I1216 06:32:25.189729 1633651 command_runner.go:130] >       "size":  "519884",
	I1216 06:32:25.189732 1633651 command_runner.go:130] >       "uid":  {
	I1216 06:32:25.189736 1633651 command_runner.go:130] >         "value":  "65535"
	I1216 06:32:25.189740 1633651 command_runner.go:130] >       },
	I1216 06:32:25.189744 1633651 command_runner.go:130] >       "username":  "",
	I1216 06:32:25.189748 1633651 command_runner.go:130] >       "pinned":  true
	I1216 06:32:25.189751 1633651 command_runner.go:130] >     }
	I1216 06:32:25.189754 1633651 command_runner.go:130] >   ]
	I1216 06:32:25.189758 1633651 command_runner.go:130] > }
	I1216 06:32:25.192082 1633651 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:32:25.192103 1633651 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:32:25.192110 1633651 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:32:25.192213 1633651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
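The ExecStart override and the cluster config above are what minikube renders into the kubelet drop-in for this node. As a rough sketch only (the helper and the flag set below are assumptions for illustration, not minikube's kubeadm template), per-node settings can be flattened into such a flag line like this:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// kubeletFlags joins key/value settings into "--key=value" form, sorted so the
// rendered line is deterministic, similar in spirit to the ExecStart above.
func kubeletFlags(opts map[string]string) string {
	keys := make([]string, 0, len(opts))
	for k := range opts {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, "--"+k+"="+opts[k])
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(kubeletFlags(map[string]string{
		"hostname-override": "functional-364120",
		"node-ip":           "192.168.49.2",
		"kubeconfig":        "/etc/kubernetes/kubelet.conf",
	}))
}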
	I1216 06:32:25.192293 1633651 ssh_runner.go:195] Run: crio config
	I1216 06:32:25.241430 1633651 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 06:32:25.241454 1633651 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 06:32:25.241463 1633651 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 06:32:25.241467 1633651 command_runner.go:130] > #
	I1216 06:32:25.241474 1633651 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 06:32:25.241481 1633651 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 06:32:25.241487 1633651 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 06:32:25.241503 1633651 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 06:32:25.241507 1633651 command_runner.go:130] > # reload'.
	I1216 06:32:25.241513 1633651 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 06:32:25.241520 1633651 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 06:32:25.241526 1633651 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 06:32:25.241533 1633651 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 06:32:25.241546 1633651 command_runner.go:130] > [crio]
	I1216 06:32:25.241552 1633651 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 06:32:25.241558 1633651 command_runner.go:130] > # container images, in this directory.
	I1216 06:32:25.242467 1633651 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 06:32:25.242525 1633651 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 06:32:25.243204 1633651 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 06:32:25.243220 1633651 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1216 06:32:25.243745 1633651 command_runner.go:130] > # imagestore = ""
	I1216 06:32:25.243759 1633651 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 06:32:25.243765 1633651 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 06:32:25.244384 1633651 command_runner.go:130] > # storage_driver = "overlay"
	I1216 06:32:25.244405 1633651 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 06:32:25.244412 1633651 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 06:32:25.244775 1633651 command_runner.go:130] > # storage_option = [
	I1216 06:32:25.245138 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.245151 1633651 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 06:32:25.245190 1633651 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 06:32:25.245804 1633651 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 06:32:25.245817 1633651 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 06:32:25.245829 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 06:32:25.245834 1633651 command_runner.go:130] > # always happen on a node reboot
	I1216 06:32:25.246485 1633651 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 06:32:25.246511 1633651 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 06:32:25.246534 1633651 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 06:32:25.246545 1633651 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 06:32:25.247059 1633651 command_runner.go:130] > # version_file_persist = ""
	I1216 06:32:25.247081 1633651 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 06:32:25.247091 1633651 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 06:32:25.247784 1633651 command_runner.go:130] > # internal_wipe = true
	I1216 06:32:25.247805 1633651 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 06:32:25.247812 1633651 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 06:32:25.248459 1633651 command_runner.go:130] > # internal_repair = true
	I1216 06:32:25.248493 1633651 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 06:32:25.248501 1633651 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 06:32:25.248507 1633651 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 06:32:25.249140 1633651 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 06:32:25.249157 1633651 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 06:32:25.249161 1633651 command_runner.go:130] > [crio.api]
	I1216 06:32:25.249167 1633651 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 06:32:25.251400 1633651 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 06:32:25.251419 1633651 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 06:32:25.251426 1633651 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 06:32:25.251453 1633651 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 06:32:25.251465 1633651 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 06:32:25.251470 1633651 command_runner.go:130] > # stream_port = "0"
	I1216 06:32:25.251476 1633651 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 06:32:25.251480 1633651 command_runner.go:130] > # stream_enable_tls = false
	I1216 06:32:25.251487 1633651 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 06:32:25.251494 1633651 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 06:32:25.251501 1633651 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 06:32:25.251510 1633651 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251527 1633651 command_runner.go:130] > # stream_tls_cert = ""
	I1216 06:32:25.251540 1633651 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 06:32:25.251546 1633651 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 06:32:25.251563 1633651 command_runner.go:130] > # stream_tls_key = ""
	I1216 06:32:25.251575 1633651 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 06:32:25.251585 1633651 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 06:32:25.251591 1633651 command_runner.go:130] > # automatically pick up the changes.
	I1216 06:32:25.251603 1633651 command_runner.go:130] > # stream_tls_ca = ""
	I1216 06:32:25.251622 1633651 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251658 1633651 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 06:32:25.251672 1633651 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 06:32:25.251677 1633651 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 06:32:25.251692 1633651 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 06:32:25.251703 1633651 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 06:32:25.251707 1633651 command_runner.go:130] > [crio.runtime]
	I1216 06:32:25.251713 1633651 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 06:32:25.251719 1633651 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 06:32:25.251735 1633651 command_runner.go:130] > # "nofile=1024:2048"
	I1216 06:32:25.251746 1633651 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 06:32:25.251751 1633651 command_runner.go:130] > # default_ulimits = [
	I1216 06:32:25.251754 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251760 1633651 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 06:32:25.251767 1633651 command_runner.go:130] > # no_pivot = false
	I1216 06:32:25.251773 1633651 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 06:32:25.251779 1633651 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 06:32:25.251788 1633651 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 06:32:25.251794 1633651 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 06:32:25.251799 1633651 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 06:32:25.251815 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251827 1633651 command_runner.go:130] > # conmon = ""
	I1216 06:32:25.251832 1633651 command_runner.go:130] > # Cgroup setting for conmon
	I1216 06:32:25.251838 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 06:32:25.251853 1633651 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 06:32:25.251866 1633651 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 06:32:25.251872 1633651 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 06:32:25.251879 1633651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 06:32:25.251884 1633651 command_runner.go:130] > # conmon_env = [
	I1216 06:32:25.251887 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251893 1633651 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 06:32:25.251898 1633651 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 06:32:25.251906 1633651 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 06:32:25.251910 1633651 command_runner.go:130] > # default_env = [
	I1216 06:32:25.251931 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.251956 1633651 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 06:32:25.251970 1633651 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1216 06:32:25.251982 1633651 command_runner.go:130] > # selinux = false
	I1216 06:32:25.251995 1633651 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 06:32:25.252003 1633651 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 06:32:25.252037 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252047 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.252055 1633651 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 06:32:25.252060 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252066 1633651 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 06:32:25.252073 1633651 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 06:32:25.252082 1633651 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 06:32:25.252088 1633651 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 06:32:25.252097 1633651 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 06:32:25.252125 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252136 1633651 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 06:32:25.252147 1633651 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 06:32:25.252161 1633651 command_runner.go:130] > # the cgroup blockio controller.
	I1216 06:32:25.252165 1633651 command_runner.go:130] > # blockio_config_file = ""
	I1216 06:32:25.252172 1633651 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 06:32:25.252176 1633651 command_runner.go:130] > # blockio parameters.
	I1216 06:32:25.252182 1633651 command_runner.go:130] > # blockio_reload = false
	I1216 06:32:25.252207 1633651 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 06:32:25.252224 1633651 command_runner.go:130] > # irqbalance daemon.
	I1216 06:32:25.252230 1633651 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 06:32:25.252251 1633651 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 06:32:25.252260 1633651 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 06:32:25.252270 1633651 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 06:32:25.252276 1633651 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 06:32:25.252283 1633651 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 06:32:25.252291 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.252295 1633651 command_runner.go:130] > # rdt_config_file = ""
	I1216 06:32:25.252300 1633651 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 06:32:25.252305 1633651 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 06:32:25.252321 1633651 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 06:32:25.252339 1633651 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 06:32:25.252356 1633651 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 06:32:25.252372 1633651 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 06:32:25.252380 1633651 command_runner.go:130] > # will be added.
	I1216 06:32:25.252385 1633651 command_runner.go:130] > # default_capabilities = [
	I1216 06:32:25.252388 1633651 command_runner.go:130] > # 	"CHOWN",
	I1216 06:32:25.252392 1633651 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 06:32:25.252405 1633651 command_runner.go:130] > # 	"FSETID",
	I1216 06:32:25.252411 1633651 command_runner.go:130] > # 	"FOWNER",
	I1216 06:32:25.252415 1633651 command_runner.go:130] > # 	"SETGID",
	I1216 06:32:25.252431 1633651 command_runner.go:130] > # 	"SETUID",
	I1216 06:32:25.252493 1633651 command_runner.go:130] > # 	"SETPCAP",
	I1216 06:32:25.252505 1633651 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 06:32:25.252509 1633651 command_runner.go:130] > # 	"KILL",
	I1216 06:32:25.252512 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252520 1633651 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 06:32:25.252530 1633651 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 06:32:25.252534 1633651 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 06:32:25.252541 1633651 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 06:32:25.252547 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252564 1633651 command_runner.go:130] > default_sysctls = [
	I1216 06:32:25.252577 1633651 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 06:32:25.252581 1633651 command_runner.go:130] > ]
	I1216 06:32:25.252587 1633651 command_runner.go:130] > # List of devices on the host that a
	I1216 06:32:25.252597 1633651 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 06:32:25.252601 1633651 command_runner.go:130] > # allowed_devices = [
	I1216 06:32:25.252605 1633651 command_runner.go:130] > # 	"/dev/fuse",
	I1216 06:32:25.252610 1633651 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 06:32:25.252613 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252624 1633651 command_runner.go:130] > # List of additional devices, specified as
	I1216 06:32:25.252649 1633651 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 06:32:25.252661 1633651 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 06:32:25.252667 1633651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 06:32:25.252677 1633651 command_runner.go:130] > # additional_devices = [
	I1216 06:32:25.252685 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252691 1633651 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 06:32:25.252703 1633651 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 06:32:25.252716 1633651 command_runner.go:130] > # 	"/etc/cdi",
	I1216 06:32:25.252739 1633651 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 06:32:25.252743 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252750 1633651 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 06:32:25.252759 1633651 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 06:32:25.252769 1633651 command_runner.go:130] > # Defaults to false.
	I1216 06:32:25.252779 1633651 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 06:32:25.252786 1633651 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 06:32:25.252792 1633651 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 06:32:25.252807 1633651 command_runner.go:130] > # hooks_dir = [
	I1216 06:32:25.252819 1633651 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 06:32:25.252823 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.252829 1633651 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 06:32:25.252851 1633651 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 06:32:25.252857 1633651 command_runner.go:130] > # its default mounts from the following two files:
	I1216 06:32:25.252863 1633651 command_runner.go:130] > #
	I1216 06:32:25.252870 1633651 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 06:32:25.252876 1633651 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 06:32:25.252882 1633651 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 06:32:25.252886 1633651 command_runner.go:130] > #
	I1216 06:32:25.252893 1633651 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 06:32:25.252917 1633651 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 06:32:25.252940 1633651 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 06:32:25.252947 1633651 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 06:32:25.252950 1633651 command_runner.go:130] > #
	I1216 06:32:25.252955 1633651 command_runner.go:130] > # default_mounts_file = ""
	I1216 06:32:25.252963 1633651 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 06:32:25.252970 1633651 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 06:32:25.252977 1633651 command_runner.go:130] > # pids_limit = -1
	I1216 06:32:25.252989 1633651 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1216 06:32:25.253005 1633651 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 06:32:25.253018 1633651 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 06:32:25.253043 1633651 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 06:32:25.253055 1633651 command_runner.go:130] > # log_size_max = -1
	I1216 06:32:25.253064 1633651 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 06:32:25.253068 1633651 command_runner.go:130] > # log_to_journald = false
	I1216 06:32:25.253080 1633651 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 06:32:25.253090 1633651 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 06:32:25.253096 1633651 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 06:32:25.253101 1633651 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 06:32:25.253123 1633651 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 06:32:25.253128 1633651 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 06:32:25.253151 1633651 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 06:32:25.253157 1633651 command_runner.go:130] > # read_only = false
	I1216 06:32:25.253169 1633651 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 06:32:25.253183 1633651 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 06:32:25.253188 1633651 command_runner.go:130] > # live configuration reload.
	I1216 06:32:25.253196 1633651 command_runner.go:130] > # log_level = "info"
	I1216 06:32:25.253219 1633651 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 06:32:25.253232 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.253236 1633651 command_runner.go:130] > # log_filter = ""
	I1216 06:32:25.253252 1633651 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253264 1633651 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 06:32:25.253273 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253281 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253287 1633651 command_runner.go:130] > # uid_mappings = ""
	I1216 06:32:25.253293 1633651 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 06:32:25.253300 1633651 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 06:32:25.253311 1633651 command_runner.go:130] > # separated by comma.
	I1216 06:32:25.253328 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253340 1633651 command_runner.go:130] > # gid_mappings = ""
	I1216 06:32:25.253346 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 06:32:25.253362 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253369 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253377 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253385 1633651 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 06:32:25.253391 1633651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 06:32:25.253408 1633651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 06:32:25.253421 1633651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 06:32:25.253438 1633651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 06:32:25.253448 1633651 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 06:32:25.253459 1633651 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 06:32:25.253468 1633651 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 06:32:25.253475 1633651 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 06:32:25.253481 1633651 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 06:32:25.253487 1633651 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 06:32:25.253493 1633651 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 06:32:25.253518 1633651 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 06:32:25.253530 1633651 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 06:32:25.253541 1633651 command_runner.go:130] > # drop_infra_ctr = true
	I1216 06:32:25.253557 1633651 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 06:32:25.253566 1633651 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 06:32:25.253573 1633651 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 06:32:25.253581 1633651 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 06:32:25.253607 1633651 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 06:32:25.253614 1633651 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 06:32:25.253630 1633651 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 06:32:25.253643 1633651 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 06:32:25.253647 1633651 command_runner.go:130] > # shared_cpuset = ""
	I1216 06:32:25.253653 1633651 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 06:32:25.253666 1633651 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 06:32:25.253670 1633651 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 06:32:25.253681 1633651 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 06:32:25.253688 1633651 command_runner.go:130] > # pinns_path = ""
	I1216 06:32:25.253694 1633651 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 06:32:25.253718 1633651 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 06:32:25.253731 1633651 command_runner.go:130] > # enable_criu_support = true
	I1216 06:32:25.253736 1633651 command_runner.go:130] > # Enable/disable the generation of the container,
	I1216 06:32:25.253754 1633651 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 06:32:25.253764 1633651 command_runner.go:130] > # enable_pod_events = false
	I1216 06:32:25.253771 1633651 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 06:32:25.253776 1633651 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 06:32:25.253786 1633651 command_runner.go:130] > # default_runtime = "crun"
	I1216 06:32:25.253795 1633651 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 06:32:25.253803 1633651 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1216 06:32:25.253814 1633651 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 06:32:25.253835 1633651 command_runner.go:130] > # creation as a file is not desired either.
	I1216 06:32:25.253853 1633651 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 06:32:25.253868 1633651 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 06:32:25.253876 1633651 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 06:32:25.253879 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.253885 1633651 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 06:32:25.253891 1633651 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 06:32:25.253923 1633651 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 06:32:25.253938 1633651 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 06:32:25.253941 1633651 command_runner.go:130] > #
	I1216 06:32:25.253946 1633651 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 06:32:25.253955 1633651 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 06:32:25.253959 1633651 command_runner.go:130] > # runtime_type = "oci"
	I1216 06:32:25.253977 1633651 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 06:32:25.253987 1633651 command_runner.go:130] > # inherit_default_runtime = false
	I1216 06:32:25.254007 1633651 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 06:32:25.254012 1633651 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 06:32:25.254016 1633651 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 06:32:25.254020 1633651 command_runner.go:130] > # monitor_env = []
	I1216 06:32:25.254034 1633651 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 06:32:25.254044 1633651 command_runner.go:130] > # allowed_annotations = []
	I1216 06:32:25.254060 1633651 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 06:32:25.254072 1633651 command_runner.go:130] > # no_sync_log = false
	I1216 06:32:25.254076 1633651 command_runner.go:130] > # default_annotations = {}
	I1216 06:32:25.254081 1633651 command_runner.go:130] > # stream_websockets = false
	I1216 06:32:25.254088 1633651 command_runner.go:130] > # seccomp_profile = ""
	I1216 06:32:25.254142 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.254155 1633651 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 06:32:25.254162 1633651 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 06:32:25.254179 1633651 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 06:32:25.254193 1633651 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 06:32:25.254197 1633651 command_runner.go:130] > #   in $PATH.
	I1216 06:32:25.254203 1633651 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 06:32:25.254216 1633651 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 06:32:25.254223 1633651 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 06:32:25.254226 1633651 command_runner.go:130] > #   state.
	I1216 06:32:25.254232 1633651 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 06:32:25.254254 1633651 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1216 06:32:25.254272 1633651 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 06:32:25.254285 1633651 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 06:32:25.254290 1633651 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 06:32:25.254302 1633651 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 06:32:25.254311 1633651 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 06:32:25.254317 1633651 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 06:32:25.254340 1633651 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 06:32:25.254347 1633651 command_runner.go:130] > #   The currently recognized values are:
	I1216 06:32:25.254369 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 06:32:25.254378 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 06:32:25.254387 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 06:32:25.254393 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 06:32:25.254405 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 06:32:25.254419 1633651 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 06:32:25.254436 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 06:32:25.254450 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 06:32:25.254456 1633651 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 06:32:25.254476 1633651 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 06:32:25.254491 1633651 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 06:32:25.254498 1633651 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 06:32:25.254509 1633651 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 06:32:25.254520 1633651 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 06:32:25.254530 1633651 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 06:32:25.254561 1633651 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 06:32:25.254585 1633651 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 06:32:25.254596 1633651 command_runner.go:130] > #   deprecated option "conmon".
	I1216 06:32:25.254603 1633651 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 06:32:25.254613 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 06:32:25.254624 1633651 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 06:32:25.254629 1633651 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 06:32:25.254639 1633651 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 06:32:25.254660 1633651 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 06:32:25.254668 1633651 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 06:32:25.254672 1633651 command_runner.go:130] > #   conmon-rs by using:
	I1216 06:32:25.254689 1633651 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 06:32:25.254709 1633651 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 06:32:25.254724 1633651 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 06:32:25.254731 1633651 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 06:32:25.254739 1633651 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 06:32:25.254746 1633651 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 06:32:25.254767 1633651 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 06:32:25.254780 1633651 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 06:32:25.254799 1633651 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 06:32:25.254817 1633651 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 06:32:25.254822 1633651 command_runner.go:130] > #   when a machine crash happens.
	I1216 06:32:25.254829 1633651 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 06:32:25.254840 1633651 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 06:32:25.254848 1633651 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 06:32:25.254855 1633651 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 06:32:25.254861 1633651 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 06:32:25.254884 1633651 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 06:32:25.254894 1633651 command_runner.go:130] > #
	I1216 06:32:25.254899 1633651 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 06:32:25.254902 1633651 command_runner.go:130] > #
	I1216 06:32:25.254922 1633651 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 06:32:25.254936 1633651 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 06:32:25.254939 1633651 command_runner.go:130] > #
	I1216 06:32:25.254946 1633651 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 06:32:25.254954 1633651 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 06:32:25.254957 1633651 command_runner.go:130] > #
	I1216 06:32:25.254964 1633651 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 06:32:25.254970 1633651 command_runner.go:130] > # feature.
	I1216 06:32:25.254973 1633651 command_runner.go:130] > #
	I1216 06:32:25.254979 1633651 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 06:32:25.255001 1633651 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 06:32:25.255015 1633651 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 06:32:25.255021 1633651 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 06:32:25.255037 1633651 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 06:32:25.255046 1633651 command_runner.go:130] > #
	I1216 06:32:25.255053 1633651 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 06:32:25.255059 1633651 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 06:32:25.255065 1633651 command_runner.go:130] > #
	I1216 06:32:25.255071 1633651 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 06:32:25.255076 1633651 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 06:32:25.255079 1633651 command_runner.go:130] > #
	I1216 06:32:25.255089 1633651 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 06:32:25.255098 1633651 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 06:32:25.255116 1633651 command_runner.go:130] > # limitation.
	I1216 06:32:25.255127 1633651 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 06:32:25.255133 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 06:32:25.255143 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255151 1633651 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 06:32:25.255155 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255165 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255174 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255210 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255222 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255226 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255231 1633651 command_runner.go:130] > allowed_annotations = [
	I1216 06:32:25.255235 1633651 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 06:32:25.255238 1633651 command_runner.go:130] > ]
	I1216 06:32:25.255247 1633651 command_runner.go:130] > privileged_without_host_devices = false
	I1216 06:32:25.255251 1633651 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 06:32:25.255267 1633651 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 06:32:25.255271 1633651 command_runner.go:130] > runtime_type = ""
	I1216 06:32:25.255274 1633651 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 06:32:25.255290 1633651 command_runner.go:130] > inherit_default_runtime = false
	I1216 06:32:25.255300 1633651 command_runner.go:130] > runtime_config_path = ""
	I1216 06:32:25.255305 1633651 command_runner.go:130] > container_min_memory = ""
	I1216 06:32:25.255324 1633651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 06:32:25.255354 1633651 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 06:32:25.255360 1633651 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 06:32:25.255364 1633651 command_runner.go:130] > privileged_without_host_devices = false
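The crun and runc tables above are instances of the runtime-handler format described earlier in this dump. As a sketch under the assumption that github.com/BurntSushi/toml is an acceptable parser for the example (CRI-O's own loader is not shown here), such a table decodes into a map of handlers:

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// runtimeHandler mirrors a subset of the fields shown in the config dump.
type runtimeHandler struct {
	RuntimePath        string   `toml:"runtime_path"`
	RuntimeRoot        string   `toml:"runtime_root"`
	MonitorPath        string   `toml:"monitor_path"`
	MonitorCgroup      string   `toml:"monitor_cgroup"`
	AllowedAnnotations []string `toml:"allowed_annotations"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	doc := `
[crio.runtime.runtimes.crun]
runtime_path = "/usr/libexec/crio/crun"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "pod"
allowed_annotations = ["io.containers.trace-syscall"]
`
	var cfg crioConfig
	if _, err := toml.Decode(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["crun"])
}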
	I1216 06:32:25.255371 1633651 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 06:32:25.255376 1633651 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 06:32:25.255383 1633651 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 06:32:25.255413 1633651 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 06:32:25.255438 1633651 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 06:32:25.255450 1633651 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 06:32:25.255462 1633651 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 06:32:25.255468 1633651 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 06:32:25.255478 1633651 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 06:32:25.255505 1633651 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 06:32:25.255522 1633651 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 06:32:25.255540 1633651 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 06:32:25.255551 1633651 command_runner.go:130] > # Example:
	I1216 06:32:25.255560 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 06:32:25.255569 1633651 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 06:32:25.255576 1633651 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 06:32:25.255584 1633651 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 06:32:25.255587 1633651 command_runner.go:130] > # cpuset = "0-1"
	I1216 06:32:25.255591 1633651 command_runner.go:130] > # cpushares = "5"
	I1216 06:32:25.255595 1633651 command_runner.go:130] > # cpuquota = "1000"
	I1216 06:32:25.255625 1633651 command_runner.go:130] > # cpuperiod = "100000"
	I1216 06:32:25.255636 1633651 command_runner.go:130] > # cpulimit = "35"
	I1216 06:32:25.255640 1633651 command_runner.go:130] > # Where:
	I1216 06:32:25.255645 1633651 command_runner.go:130] > # The workload name is workload-type.
	I1216 06:32:25.255652 1633651 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 06:32:25.255661 1633651 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 06:32:25.255667 1633651 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 06:32:25.255678 1633651 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 06:32:25.255686 1633651 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1216 06:32:25.255715 1633651 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 06:32:25.255733 1633651 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 06:32:25.255738 1633651 command_runner.go:130] > # Default value is set to true
	I1216 06:32:25.255749 1633651 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 06:32:25.255755 1633651 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 06:32:25.255760 1633651 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 06:32:25.255767 1633651 command_runner.go:130] > # Default value is set to 'false'
	I1216 06:32:25.255771 1633651 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 06:32:25.255776 1633651 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 06:32:25.255807 1633651 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 06:32:25.255817 1633651 command_runner.go:130] > # timezone = ""
	I1216 06:32:25.255824 1633651 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 06:32:25.255830 1633651 command_runner.go:130] > #
	I1216 06:32:25.255836 1633651 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 06:32:25.255846 1633651 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 06:32:25.255850 1633651 command_runner.go:130] > [crio.image]
	I1216 06:32:25.255856 1633651 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 06:32:25.255866 1633651 command_runner.go:130] > # default_transport = "docker://"
	I1216 06:32:25.255888 1633651 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 06:32:25.255905 1633651 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255915 1633651 command_runner.go:130] > # global_auth_file = ""
	I1216 06:32:25.255920 1633651 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 06:32:25.255925 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255931 1633651 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 06:32:25.255940 1633651 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 06:32:25.255955 1633651 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 06:32:25.255961 1633651 command_runner.go:130] > # This option supports live configuration reload.
	I1216 06:32:25.255968 1633651 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 06:32:25.255989 1633651 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 06:32:25.255997 1633651 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1216 06:32:25.256008 1633651 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1216 06:32:25.256014 1633651 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 06:32:25.256020 1633651 command_runner.go:130] > # pause_command = "/pause"
	I1216 06:32:25.256026 1633651 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 06:32:25.256032 1633651 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 06:32:25.256042 1633651 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 06:32:25.256057 1633651 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 06:32:25.256069 1633651 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 06:32:25.256085 1633651 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 06:32:25.256096 1633651 command_runner.go:130] > # pinned_images = [
	I1216 06:32:25.256100 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256106 1633651 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 06:32:25.256116 1633651 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 06:32:25.256122 1633651 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 06:32:25.256131 1633651 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 06:32:25.256139 1633651 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 06:32:25.256144 1633651 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 06:32:25.256150 1633651 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 06:32:25.256179 1633651 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 06:32:25.256192 1633651 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 06:32:25.256207 1633651 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1216 06:32:25.256217 1633651 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 06:32:25.256222 1633651 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 06:32:25.256229 1633651 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 06:32:25.256238 1633651 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 06:32:25.256242 1633651 command_runner.go:130] > # changing them here.
	I1216 06:32:25.256266 1633651 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 06:32:25.256283 1633651 command_runner.go:130] > # insecure_registries = [
	I1216 06:32:25.256293 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256303 1633651 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 06:32:25.256311 1633651 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1216 06:32:25.256321 1633651 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 06:32:25.256331 1633651 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 06:32:25.256347 1633651 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 06:32:25.256360 1633651 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 06:32:25.256372 1633651 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 06:32:25.256380 1633651 command_runner.go:130] > # auto_reload_registries = false
	I1216 06:32:25.256386 1633651 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 06:32:25.256395 1633651 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1216 06:32:25.256404 1633651 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 06:32:25.256408 1633651 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 06:32:25.256422 1633651 command_runner.go:130] > # The mode of short name resolution.
	I1216 06:32:25.256436 1633651 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 06:32:25.256452 1633651 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1216 06:32:25.256479 1633651 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 06:32:25.256484 1633651 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 06:32:25.256490 1633651 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1216 06:32:25.256497 1633651 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 06:32:25.256512 1633651 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 06:32:25.256532 1633651 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 06:32:25.256544 1633651 command_runner.go:130] > # CNI plugins.
	I1216 06:32:25.256548 1633651 command_runner.go:130] > [crio.network]
	I1216 06:32:25.256566 1633651 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 06:32:25.256583 1633651 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1216 06:32:25.256590 1633651 command_runner.go:130] > # cni_default_network = ""
	I1216 06:32:25.256596 1633651 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 06:32:25.256603 1633651 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 06:32:25.256610 1633651 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 06:32:25.256626 1633651 command_runner.go:130] > # plugin_dirs = [
	I1216 06:32:25.256650 1633651 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 06:32:25.256654 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256678 1633651 command_runner.go:130] > # List of included pod metrics.
	I1216 06:32:25.256691 1633651 command_runner.go:130] > # included_pod_metrics = [
	I1216 06:32:25.256695 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256701 1633651 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 06:32:25.256708 1633651 command_runner.go:130] > [crio.metrics]
	I1216 06:32:25.256712 1633651 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 06:32:25.256717 1633651 command_runner.go:130] > # enable_metrics = false
	I1216 06:32:25.256723 1633651 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 06:32:25.256728 1633651 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 06:32:25.256737 1633651 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1216 06:32:25.256762 1633651 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 06:32:25.256774 1633651 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 06:32:25.256778 1633651 command_runner.go:130] > # metrics_collectors = [
	I1216 06:32:25.256799 1633651 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 06:32:25.256808 1633651 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 06:32:25.256813 1633651 command_runner.go:130] > # 	"containers_oom_total",
	I1216 06:32:25.256818 1633651 command_runner.go:130] > # 	"processes_defunct",
	I1216 06:32:25.256829 1633651 command_runner.go:130] > # 	"operations_total",
	I1216 06:32:25.256834 1633651 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 06:32:25.256839 1633651 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 06:32:25.256842 1633651 command_runner.go:130] > # 	"operations_errors_total",
	I1216 06:32:25.256847 1633651 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 06:32:25.256851 1633651 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 06:32:25.256855 1633651 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 06:32:25.256869 1633651 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 06:32:25.256888 1633651 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 06:32:25.256897 1633651 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 06:32:25.256901 1633651 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 06:32:25.256906 1633651 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 06:32:25.256913 1633651 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 06:32:25.256916 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.256923 1633651 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 06:32:25.256930 1633651 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 06:32:25.256944 1633651 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 06:32:25.256952 1633651 command_runner.go:130] > # metrics_port = 9090
	I1216 06:32:25.256958 1633651 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 06:32:25.256967 1633651 command_runner.go:130] > # metrics_socket = ""
	I1216 06:32:25.256972 1633651 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 06:32:25.256979 1633651 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 06:32:25.256987 1633651 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 06:32:25.257000 1633651 command_runner.go:130] > # certificate on any modification event.
	I1216 06:32:25.257004 1633651 command_runner.go:130] > # metrics_cert = ""
	I1216 06:32:25.257023 1633651 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 06:32:25.257034 1633651 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 06:32:25.257039 1633651 command_runner.go:130] > # metrics_key = ""
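The [crio.metrics] block above documents CRI-O's Prometheus endpoint: disabled by default, and listening on 127.0.0.1:9090 when enabled. As a rough illustration (not part of the test run), a Go probe of that endpoint could look like the sketch below; the URL simply combines the default metrics_host and metrics_port echoed above.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Defaults echoed above: metrics_host "127.0.0.1", metrics_port 9090, enable_metrics off.
        resp, err := http.Get("http://127.0.0.1:9090/metrics")
        if err != nil {
            fmt.Fprintln(os.Stderr, "metrics endpoint not reachable (enable_metrics defaults to false):", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s", body)
    }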
	I1216 06:32:25.257061 1633651 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 06:32:25.257070 1633651 command_runner.go:130] > [crio.tracing]
	I1216 06:32:25.257076 1633651 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 06:32:25.257080 1633651 command_runner.go:130] > # enable_tracing = false
	I1216 06:32:25.257088 1633651 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 06:32:25.257099 1633651 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 06:32:25.257111 1633651 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 06:32:25.257127 1633651 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 06:32:25.257138 1633651 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 06:32:25.257142 1633651 command_runner.go:130] > [crio.nri]
	I1216 06:32:25.257156 1633651 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 06:32:25.257167 1633651 command_runner.go:130] > # enable_nri = true
	I1216 06:32:25.257172 1633651 command_runner.go:130] > # NRI socket to listen on.
	I1216 06:32:25.257181 1633651 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 06:32:25.257193 1633651 command_runner.go:130] > # NRI plugin directory to use.
	I1216 06:32:25.257198 1633651 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 06:32:25.257205 1633651 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 06:32:25.257210 1633651 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 06:32:25.257218 1633651 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 06:32:25.257323 1633651 command_runner.go:130] > # nri_disable_connections = false
	I1216 06:32:25.257337 1633651 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 06:32:25.257342 1633651 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 06:32:25.257358 1633651 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 06:32:25.257370 1633651 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 06:32:25.257375 1633651 command_runner.go:130] > # NRI default validator configuration.
	I1216 06:32:25.257383 1633651 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 06:32:25.257393 1633651 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 06:32:25.257397 1633651 command_runner.go:130] > # can be restricted/rejected:
	I1216 06:32:25.257403 1633651 command_runner.go:130] > # - OCI hook injection
	I1216 06:32:25.257409 1633651 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 06:32:25.257417 1633651 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 06:32:25.257431 1633651 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 06:32:25.257443 1633651 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 06:32:25.257465 1633651 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 06:32:25.257479 1633651 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 06:32:25.257485 1633651 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 06:32:25.257493 1633651 command_runner.go:130] > #
	I1216 06:32:25.257498 1633651 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 06:32:25.257503 1633651 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 06:32:25.257510 1633651 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 06:32:25.257516 1633651 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 06:32:25.257522 1633651 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 06:32:25.257549 1633651 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 06:32:25.257562 1633651 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 06:32:25.257568 1633651 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 06:32:25.257574 1633651 command_runner.go:130] > # ]
	I1216 06:32:25.257593 1633651 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1216 06:32:25.257604 1633651 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 06:32:25.257609 1633651 command_runner.go:130] > [crio.stats]
	I1216 06:32:25.257639 1633651 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 06:32:25.257651 1633651 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 06:32:25.257655 1633651 command_runner.go:130] > # stats_collection_period = 0
	I1216 06:32:25.257662 1633651 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 06:32:25.257671 1633651 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 06:32:25.257675 1633651 command_runner.go:130] > # collection_period = 0
	I1216 06:32:25.259482 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219727326Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 06:32:25.259512 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219767515Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 06:32:25.259524 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219798038Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 06:32:25.259536 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219823548Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 06:32:25.259545 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.219901653Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:32:25.259556 1633651 command_runner.go:130] ! time="2025-12-16T06:32:25.220263616Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 06:32:25.259571 1633651 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
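CRI-O assembles its effective configuration from /etc/crio/crio.conf plus the drop-ins under /etc/crio/crio.conf.d, as the "Updating config from drop-in file" messages above show. A minimal sketch of writing such a drop-in in Go follows; the [crio.image] table header and the 20-image.conf filename are assumptions for illustration, while the keys and values mirror the options echoed in the dump above.

    package main

    import "os"

    // A hypothetical drop-in that pins the pause image and points at the signature policy
    // referenced in the dump above. The [crio.image] header and file name are assumptions.
    const dropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    signature_policy = "/etc/crio/policy.json"
    pinned_images = [
        "registry.k8s.io/pause:3.10.1",
    ]
    `

    func main() {
        // Writing requires root; CRI-O reloads some of these options live, others only on restart.
        if err := os.WriteFile("/etc/crio/crio.conf.d/20-image.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
    }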
	I1216 06:32:25.260036 1633651 cni.go:84] Creating CNI manager for ""
	I1216 06:32:25.260064 1633651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:32:25.260092 1633651 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:32:25.260122 1633651 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:32:25.260297 1633651 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:32:25.260383 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:32:25.268343 1633651 command_runner.go:130] > kubeadm
	I1216 06:32:25.268362 1633651 command_runner.go:130] > kubectl
	I1216 06:32:25.268366 1633651 command_runner.go:130] > kubelet
	I1216 06:32:25.268406 1633651 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:32:25.268462 1633651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:32:25.276071 1633651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:32:25.288575 1633651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:32:25.300994 1633651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 06:32:25.313670 1633651 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:32:25.317448 1633651 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 06:32:25.317550 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:25.453328 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:26.148228 1633651 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:32:26.148252 1633651 certs.go:195] generating shared ca certs ...
	I1216 06:32:26.148269 1633651 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.148410 1633651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:32:26.148482 1633651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:32:26.148493 1633651 certs.go:257] generating profile certs ...
	I1216 06:32:26.148601 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:32:26.148663 1633651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:32:26.148727 1633651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:32:26.148740 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 06:32:26.148753 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 06:32:26.148765 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 06:32:26.148785 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 06:32:26.148802 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 06:32:26.148814 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 06:32:26.148830 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 06:32:26.148841 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 06:32:26.148892 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:32:26.148927 1633651 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:32:26.148935 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:32:26.148966 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:32:26.148996 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:32:26.149023 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:32:26.149078 1633651 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:32:26.149109 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.149127 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.149143 1633651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.149727 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:32:26.167732 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:32:26.185872 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:32:26.203036 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:32:26.220347 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:32:26.238248 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:32:26.255572 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:32:26.272719 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:32:26.290975 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:32:26.308752 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:32:26.326261 1633651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:32:26.344085 1633651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:32:26.357043 1633651 ssh_runner.go:195] Run: openssl version
	I1216 06:32:26.362895 1633651 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 06:32:26.363366 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.370980 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:32:26.378519 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382213 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382261 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.382313 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:32:26.422786 1633651 command_runner.go:130] > 3ec20f2e
	I1216 06:32:26.423247 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:32:26.430703 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.437977 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:32:26.445376 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449306 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449352 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.449400 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:32:26.489732 1633651 command_runner.go:130] > b5213941
	I1216 06:32:26.490221 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:32:26.498231 1633651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.505778 1633651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:32:26.513624 1633651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517603 1633651 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517655 1633651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.517708 1633651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:32:26.558501 1633651 command_runner.go:130] > 51391683
	I1216 06:32:26.558962 1633651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
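The sequence above is the standard OpenSSL trust-store convention: place the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash -noout, and expose it as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that convention (illustrative only, not minikube's implementation, which drives the same commands over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        // `openssl x509 -hash -noout -in <cert>` prints the subject hash (e.g. "b5213941").
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink already present, as the `sudo test -L` checks above verify
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }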
	I1216 06:32:26.566709 1633651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570687 1633651 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:32:26.570714 1633651 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 06:32:26.570721 1633651 command_runner.go:130] > Device: 259,1	Inode: 1064557     Links: 1
	I1216 06:32:26.570728 1633651 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 06:32:26.570734 1633651 command_runner.go:130] > Access: 2025-12-16 06:28:17.989070314 +0000
	I1216 06:32:26.570739 1633651 command_runner.go:130] > Modify: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570745 1633651 command_runner.go:130] > Change: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570750 1633651 command_runner.go:130] >  Birth: 2025-12-16 06:24:14.133380006 +0000
	I1216 06:32:26.570807 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:32:26.611178 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.611643 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:32:26.653044 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.653496 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:32:26.693948 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.694452 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:32:26.737177 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.737685 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:32:26.777863 1633651 command_runner.go:130] > Certificate will not expire
	I1216 06:32:26.778315 1633651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 06:32:26.821770 1633651 command_runner.go:130] > Certificate will not expire
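Each openssl x509 -checkend 86400 probe above asks whether the certificate will still be valid 24 hours from now. A self-contained Go equivalent using crypto/x509 (a sketch, not minikube's code; the path is one of the certificates checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Mirrors `-checkend 86400`: does "now + d" fall past NotAfter?
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }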
	I1216 06:32:26.822198 1633651 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:32:26.822282 1633651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:32:26.822342 1633651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:32:26.848560 1633651 cri.go:89] found id: ""
	I1216 06:32:26.848631 1633651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:32:26.856311 1633651 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 06:32:26.856334 1633651 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 06:32:26.856341 1633651 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 06:32:26.856353 1633651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:32:26.856377 1633651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:32:26.856451 1633651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:32:26.863716 1633651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:32:26.864139 1633651 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-364120" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.864257 1633651 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "functional-364120" cluster setting kubeconfig missing "functional-364120" context setting]
	I1216 06:32:26.864570 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.865235 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.865467 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.866570 1633651 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 06:32:26.866631 1633651 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 06:32:26.866668 1633651 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 06:32:26.866693 1633651 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 06:32:26.866720 1633651 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 06:32:26.867179 1633651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:32:26.868151 1633651 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 06:32:26.877051 1633651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 06:32:26.877090 1633651 kubeadm.go:602] duration metric: took 20.700092ms to restartPrimaryControlPlane
	I1216 06:32:26.877101 1633651 kubeadm.go:403] duration metric: took 54.908954ms to StartCluster
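The "does not require reconfiguration" decision follows the sudo diff of the existing /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new: if the rendered config matches what is already on disk, the control-plane restart can skip kubeadm reconfiguration. A simplified sketch of that comparison (illustrative only, not the restartPrimaryControlPlane code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func needsReconfig(current, proposed string) (bool, error) {
        a, err := os.ReadFile(current)
        if err != nil {
            return true, nil // no existing config on disk: (re)configure
        }
        b, err := os.ReadFile(proposed)
        if err != nil {
            return false, err
        }
        return !bytes.Equal(a, b), nil
    }

    func main() {
        changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if changed {
            fmt.Println("kubeadm config changed: control plane needs reconfiguration")
        } else {
            fmt.Println("The running cluster does not require reconfiguration")
        }
    }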
	I1216 06:32:26.877118 1633651 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.877187 1633651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.877859 1633651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:32:26.878064 1633651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 06:32:26.878625 1633651 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:32:26.878682 1633651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:32:26.878749 1633651 addons.go:70] Setting storage-provisioner=true in profile "functional-364120"
	I1216 06:32:26.878762 1633651 addons.go:239] Setting addon storage-provisioner=true in "functional-364120"
	I1216 06:32:26.878787 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.879288 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.879473 1633651 addons.go:70] Setting default-storageclass=true in profile "functional-364120"
	I1216 06:32:26.879497 1633651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-364120"
	I1216 06:32:26.879803 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.884633 1633651 out.go:179] * Verifying Kubernetes components...
	I1216 06:32:26.887314 1633651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:32:26.918200 1633651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:32:26.919874 1633651 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:32:26.920155 1633651 kapi.go:59] client config for functional-364120: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:32:26.920453 1633651 addons.go:239] Setting addon default-storageclass=true in "functional-364120"
	I1216 06:32:26.920538 1633651 host.go:66] Checking if "functional-364120" exists ...
	I1216 06:32:26.920986 1633651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:26.921004 1633651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:32:26.921061 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.921340 1633651 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:32:26.964659 1633651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:26.964697 1633651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:32:26.964756 1633651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:32:26.965286 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:26.998084 1633651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:32:27.098293 1633651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:32:27.125997 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:27.132422 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:27.897996 1633651 node_ready.go:35] waiting up to 6m0s for node "functional-364120" to be "Ready" ...
	I1216 06:32:27.898129 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:27.898194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
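From here the log alternates kubectl apply retries with GET /api/v1/nodes/functional-364120 polls: minikube waits up to 6m0s for the node's Ready condition. minikube issues those requests through its own round-trippers, but the same wait can be sketched with client-go as below (an illustration under stated assumptions, not the test code; the KUBECONFIG environment variable and node name are taken from the surrounding log):

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // The API server may still be starting (the requests above get empty responses); keep polling.
                return false, nil
            }
            for _, cond := range node.Status.Conditions {
                if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if err := waitNodeReady(context.Background(), cs, "functional-364120", 6*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, "node never became Ready:", err)
        }
    }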
	I1216 06:32:27.898417 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898455 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898484 1633651 retry.go:31] will retry after 293.203887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898523 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:27.898548 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898555 1633651 retry.go:31] will retry after 361.667439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:27.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.192028 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.251245 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.251292 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.251318 1633651 retry.go:31] will retry after 421.770055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.261399 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.326104 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.326166 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.326190 1633651 retry.go:31] will retry after 230.03946ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.398272 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.398664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:28.557150 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:28.610627 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.614370 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.614405 1633651 retry.go:31] will retry after 431.515922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.673577 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:28.751124 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:28.751167 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.751187 1633651 retry.go:31] will retry after 416.921651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:28.898406 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:28.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:28.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.046157 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:29.107254 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.107314 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.107371 1633651 retry.go:31] will retry after 899.303578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.168518 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:29.225793 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:29.229337 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.229371 1633651 retry.go:31] will retry after 758.152445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:29.398643 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.398767 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.399082 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:29.898862 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:29.898939 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:29.899317 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:29.899390 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:29.988648 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.011610 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.113177 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.113245 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.113269 1633651 retry.go:31] will retry after 739.984539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134431 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.134488 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.134525 1633651 retry.go:31] will retry after 743.078754ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.398873 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.398944 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.399345 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.854128 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:30.878717 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:30.899202 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:30.899283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:30.899567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:30.948589 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.948629 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.948651 1633651 retry.go:31] will retry after 2.54132752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989038 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:30.989082 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:30.989107 1633651 retry.go:31] will retry after 1.925489798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:31.398656 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.398729 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.399083 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:31.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:31.898714 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:31.899058 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.398954 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.399038 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:32.399469 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:32.898198 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:32.898298 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:32.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:32.914948 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:32.974729 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:32.974766 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:32.974784 1633651 retry.go:31] will retry after 2.13279976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.398213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.398308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:33.491042 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:33.546485 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:33.550699 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.550734 1633651 retry.go:31] will retry after 1.927615537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:33.899219 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:33.899329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:33.899638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:34.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:34.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:34.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:34.898732 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:35.108136 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:35.168080 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.168179 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.168237 1633651 retry.go:31] will retry after 2.609957821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.398216 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.398310 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.398589 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:35.478854 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:35.539410 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:35.539453 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.539472 1633651 retry.go:31] will retry after 2.66810674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:35.898940 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:35.899019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:35.899395 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.399231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.399312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.399638 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:36.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:36.898542 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:36.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:36.898864 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:37.398807 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.399243 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:37.778747 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:37.833515 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:37.837237 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.837278 1633651 retry.go:31] will retry after 4.537651284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:37.898560 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:37.898639 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:37.898976 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.208455 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:38.268308 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:38.268354 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.268373 1633651 retry.go:31] will retry after 8.612374195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:38.398733 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.398807 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.399077 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:38.899000 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:38.899085 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:38.899556 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:38.899628 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:39.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.398769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:39.898353 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:39.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:39.898737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:40.898499 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:40.898578 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:40.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:41.398243 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:41.398654 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:41.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:41.898352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.375464 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:42.399185 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.399260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.399531 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:42.439480 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:42.439520 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.439538 1633651 retry.go:31] will retry after 13.723834965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:42.899110 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:42.899183 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:42.899457 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.398171 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.398246 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.398594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:43.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:43.898384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:43.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:43.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:44.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:44.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:44.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:44.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.398383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:45.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.398309 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.398384 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:46.398787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:46.881536 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:46.898964 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:46.899056 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:46.899361 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:46.940375 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:46.943961 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:46.943995 1633651 retry.go:31] will retry after 5.072276608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:47.398701 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.398787 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.399064 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:47.898839 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:47.898914 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:47.899236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:48.398915 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.398993 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.399340 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:48.399397 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:48.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:48.899069 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:48.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.399214 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.399301 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.399707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:49.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:49.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:49.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.398466 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.398735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:50.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:50.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:50.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:50.898770 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:51.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:51.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:51.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:51.898592 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.017198 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:32:52.080330 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:52.080367 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.080387 1633651 retry.go:31] will retry after 19.488213597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:52.398170 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.398254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.398603 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:52.898357 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:52.898430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:52.898751 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:52.898809 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:53.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.398509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.398780 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:53.898306 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:53.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:53.898746 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.398531 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:54.898536 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:54.898616 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:54.898937 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:54.899000 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:55.398275 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:55.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:55.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:55.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.164267 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:32:56.225232 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:32:56.225280 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.225300 1633651 retry.go:31] will retry after 14.108855756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:32:56.398529 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.398594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:56.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:56.898932 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:56.899282 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:56.899334 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:57.399213 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.399288 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.399591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:57.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:57.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:57.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.398378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:58.898457 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:58.898545 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:58.898936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:32:59.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:32:59.398702 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:32:59.898313 1633651 type.go:168] "Request Body" body=""
	I1216 06:32:59.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:32:59.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.398851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:00.898739 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:00.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:00.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:01.398863 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.398936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:01.399305 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:01.898923 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:01.899005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:01.899364 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.399175 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.399247 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.399610 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:02.898189 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:02.898266 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:02.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.398333 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.398410 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.398779 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:03.898460 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:03.898527 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:03.898800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:03.898847 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:04.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.398745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:04.898458 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:04.898534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:04.898848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.398531 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.398614 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:05.898633 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:05.898709 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:05.899055 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:05.899137 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:06.398909 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.398987 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.399357 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:06.898176 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:06.898262 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:06.898675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.398306 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:07.898344 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:07.898420 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:07.898721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:08.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:08.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:08.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:08.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:08.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.398398 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.398479 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.398785 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:09.898336 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:09.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:09.898666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:10.335122 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:10.396460 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:10.396519 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.396538 1633651 retry.go:31] will retry after 12.344116424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:10.398561 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.398627 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.398890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:10.398937 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:10.898605 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:10.898693 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:10.899053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.398885 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:11.569711 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:11.631078 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:11.634606 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.634637 1633651 retry.go:31] will retry after 14.712851021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:11.899031 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:11.899113 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:11.899432 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:12.898240 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:12.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:12.898566 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:12.898607 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
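The repeated GETs to /api/v1/nodes/functional-364120 every ~500ms are the node "Ready" wait loop polling an apiserver that keeps refusing connections. A minimal client-go sketch of the same poll-until-Ready pattern (assuming a kubeconfig at the default location; illustrative only, not minikube's node_ready.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default path; node name taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-364120", metav1.GetOptions{})
		if err != nil {
			// a refused connection to the apiserver lands here, as in the log
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```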
	I1216 06:33:13.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:13.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:13.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:13.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.398404 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.398483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.398747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:14.898318 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:14.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:14.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:14.898742 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:15.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.398323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:15.898226 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:15.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:15.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.398644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:16.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:16.898716 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:16.899100 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:16.899164 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:17.398918 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.399005 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:17.899071 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:17.899230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:17.899613 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.398204 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.398291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.398652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:18.898350 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:18.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:18.898684 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:19.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:19.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:19.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:19.898318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:19.898648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.398306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:20.898284 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:20.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:20.898678 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.398294 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:21.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:21.898388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:21.898665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:21.898722 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:22.398602 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.398676 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.399053 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:22.741700 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:33:22.805176 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:22.805212 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.805230 1633651 retry.go:31] will retry after 37.521073757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:22.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:22.898570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:22.898876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.398233 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.398648 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:23.898274 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:23.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:23.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:23.898753 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:24.398440 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:24.898547 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:24.898618 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:24.898926 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:25.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:25.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:25.898639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:26.348396 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:26.398844 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.398921 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.399279 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:26.399329 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:26.417393 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:26.417436 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.417455 1633651 retry.go:31] will retry after 31.35447413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:33:26.898149 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:26.898223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:26.898585 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.398341 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.398414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.398760 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:27.898330 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:27.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:27.898845 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:28.898417 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:28.898496 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:28.898819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:28.898872 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:29.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.398632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:29.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:29.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:29.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.398392 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.398830 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:30.898474 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:30.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:30.898811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:31.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.398330 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:31.398725 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:31.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:31.898324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:31.898636 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.398372 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.398442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:32.898400 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:32.898485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:32.898850 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:33.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:33.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:33.898438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:33.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:33.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.398516 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.398867 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:34.898456 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:34.898537 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:34.898909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:35.398591 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.398658 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.398916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:35.398977 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:35.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:35.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:35.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:36.898728 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:36.898803 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:36.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:37.399202 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.399278 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.399639 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:37.399694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:37.898374 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:37.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:37.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.398505 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.398571 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:38.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:38.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:38.898677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.398839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:39.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:39.898300 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:39.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:39.898667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:40.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:40.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:40.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:40.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.398462 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.398529 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.398809 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:41.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:41.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:41.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:41.898766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:42.398755 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.398839 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.399236 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:42.898983 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:42.899053 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:42.899331 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.398183 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.398258 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:43.898308 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:43.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:43.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:44.398622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:44.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:44.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:44.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.398342 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.398448 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:45.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:45.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:45.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:46.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:46.398739 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:46.898513 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:46.898594 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:46.898959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.398772 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.398859 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.399168 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:47.898938 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:47.899012 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:47.899377 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:48.399044 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.399126 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.399458 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:48.399514 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:48.898185 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:48.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:48.898520 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.398231 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.398311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.398630 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:49.898360 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:49.898434 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:49.898761 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:50.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:50.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:50.898694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:50.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:51.398426 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.398503 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.398913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:51.898663 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:51.898743 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:51.899196 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.398565 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.398648 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.399111 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:52.898692 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:52.898773 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:52.899132 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:52.899190 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:53.398951 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.399065 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.399370 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:53.898173 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:53.898248 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:53.898623 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:54.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:54.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:54.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:55.398246 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.398320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.398650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:55.398707 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:55.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:55.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:55.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:56.898628 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:56.898703 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:56.899073 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:57.398945 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.399019 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.399371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:57.399427 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:33:57.772952 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:33:57.834039 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837641 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:33:57.837741 1633651 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:33:57.899083 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:57.899158 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:57.899422 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.398161 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.398586 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:58.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:58.898386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:58.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:33:59.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:33:59.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:33:59.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:33:59.898740 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:00.327789 1633651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:34:00.398990 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.399071 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.399382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:00.427909 1633651 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.431971 1633651 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:34:00.432103 1633651 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:34:00.437092 1633651 out.go:179] * Enabled addons: 
	I1216 06:34:00.440884 1633651 addons.go:530] duration metric: took 1m33.562192947s for enable addons: enabled=[]
	I1216 06:34:00.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:00.898392 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:00.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.398711 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:01.898244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:01.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:01.898577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:02.398409 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.398488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.398818 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:02.398876 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:02.898375 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:02.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:02.898792 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.398319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.398577 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:03.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:03.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:03.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.398335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.398654 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:04.898325 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:04.898400 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:04.898742 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:04.898801 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:05.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.398957 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:05.898686 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:05.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:05.899122 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.398925 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.399010 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.399354 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:06.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:06.899043 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:06.899401 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:06.899475 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:07.399211 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.399289 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.399665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:07.898337 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:07.898421 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:07.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.398682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:08.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:08.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:08.898748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:09.399015 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.399090 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.399360 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:09.399412 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:09.899197 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:09.899275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:09.899628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.398251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:10.898348 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:10.898422 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:10.898716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:11.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:11.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:11.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:11.898743 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:12.398541 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.398609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.398881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:12.898637 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:12.898723 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:12.899079 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.398865 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.398945 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.399273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:13.899072 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:13.899151 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:13.899501 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:13.899561 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:14.398235 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:14.898363 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:14.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:14.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.398508 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.398583 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.398859 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:15.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:15.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:15.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:16.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:16.398775 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:16.898203 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:16.898272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:16.898528 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.398515 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.398598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.398936 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:17.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:17.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:17.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:18.398422 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.398771 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:18.398820 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:18.898251 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:18.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:18.898653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.398357 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.398446 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.398791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:19.898510 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:19.898589 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:19.898872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.398763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:20.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:20.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:20.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:20.898758 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:21.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.398316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.398590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:21.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:21.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:21.898851 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.398811 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.398886 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.399204 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:22.898972 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:22.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:22.899306 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:22.899351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:23.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.399181 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.399518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:23.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:23.898332 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:23.898659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.398714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:24.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:24.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:24.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:25.398435 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.398518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.398899 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:25.398964 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:25.898643 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:25.898718 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:25.898991 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.398331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.398659 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:26.898526 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:26.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:26.899075 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.398249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:27.898275 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:27.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:27.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:27.898798 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:28.398464 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.398539 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.398917 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:28.898624 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:28.898699 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:28.899014 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.398802 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.398878 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.399221 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:29.898995 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:29.899075 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:29.899431 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:29.899497 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:30.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.398295 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.398549 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:30.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:30.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:30.898674 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.398411 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.398493 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.398835 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:31.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:31.898315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:31.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:32.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.398696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:32.398762 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:32.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:32.898526 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:32.898844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.398245 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:33.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:33.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:33.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:34.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.398386 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:34.398791 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:34.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:34.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:34.898671 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:35.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:35.898348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:35.898663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.398430 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.398756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:36.898879 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:36.898962 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:36.899298 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:36.899363 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:37.398940 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.399018 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.399339 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:37.899128 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:37.899202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:37.899475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.398196 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.398276 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.398617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:38.898346 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:38.898424 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:38.898788 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:39.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.398304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.398637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:39.398705 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:39.898341 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:39.898419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:39.898791 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.398499 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.398574 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.398963 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:40.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:40.898719 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:40.899009 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:41.398866 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.398958 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.399281 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:41.399336 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:41.899108 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:41.899190 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:41.899541 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.398226 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.398314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.398588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:42.898199 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:42.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:42.898686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.398433 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.398510 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:43.898570 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:43.898642 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:43.898913 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:43.898966 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:44.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:44.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:44.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:44.898693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.398553 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.398755 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.399042 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:45.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:45.898964 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:45.899318 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:45.899373 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:46.399167 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.399253 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.399612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:46.898505 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:46.898584 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:46.898871 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.399034 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.399118 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.399524 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:47.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:47.898367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:47.898724 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:48.398399 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.398476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.398811 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:48.398865 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:48.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:48.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:48.898763 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.398479 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.398921 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:49.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:49.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:49.898632 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.398681 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:50.898398 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:50.898476 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:50.898813 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:50.898869 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:51.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.398349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:51.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:51.898358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:51.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:52.898254 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:52.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:52.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:53.398349 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.398437 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.398787 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:53.398846 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:53.898285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:53.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:53.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.398640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:54.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:54.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:54.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:55.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:55.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:55.898640 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:55.898694 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:56.398347 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.398429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.398783 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:56.898669 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:56.898747 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:56.899097 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.399054 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.399128 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.399397 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:57.898166 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:57.898252 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:57.898582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:58.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:34:58.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:34:58.898237 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:58.898341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:58.898734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:34:59.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:34:59.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:34:59.898719 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:00.414820 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.414906 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.415201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:00.415247 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:00.899080 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:00.899160 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:00.899488 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.398203 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.398286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:01.898381 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:01.898453 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:01.898741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.398760 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.398842 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:02.898874 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:02.898953 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:02.899310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:02.899364 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:03.399127 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.399199 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.399477 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:03.898183 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:03.898263 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:03.898574 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:04.898409 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:04.898488 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:04.898770 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:05.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:05.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:05.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:05.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.398344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.398628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:06.898700 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:06.898789 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:06.899156 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:07.399150 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.399230 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.399559 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:07.399618 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:07.898272 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:07.898347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:07.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.398266 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:08.898270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:08.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:08.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.398475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.398741 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:09.898260 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:09.898336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:09.898699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:09.898756 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:10.398423 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.398892 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:10.898626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:10.898722 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:10.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.398911 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.399006 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.399384 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:11.898151 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:11.898224 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:11.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:12.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.398328 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.398616 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:12.398695 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:12.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:12.898331 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:12.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.398383 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.398463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.398838 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:13.898531 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:13.898612 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:13.898894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:14.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:14.398765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:14.898300 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:14.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:14.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.398438 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.398508 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:15.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:15.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:15.898664 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:16.898532 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:16.898606 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:16.898878 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:16.898924 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:17.398589 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.398661 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.398959 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:17.898673 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:17.898753 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:17.899078 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.398855 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.398925 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.399198 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:18.898973 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:18.899048 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:18.899383 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:18.899438 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:19.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.399174 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.399532 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:19.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:19.898323 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:19.898607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.398269 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:20.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:20.898374 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:20.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:21.398418 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.398486 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.398764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:21.398806 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:21.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:21.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:21.898714 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:22.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:22.898294 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:22.898644 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.398337 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.398411 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:23.898470 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:23.898573 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:23.898929 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:23.898986 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:24.398626 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.398696 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.398974 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:24.898302 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:24.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:24.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.398718 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:25.898396 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:25.898463 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:25.898752 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:26.398331 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.398440 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:26.398836 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:26.898830 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:26.898904 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:26.899295 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.399107 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.399188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.399497 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:27.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:27.898260 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:27.898590 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.398373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.398733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:28.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:28.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:28.898672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:28.898717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:29.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.398849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:29.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:29.898399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:29.898772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.398270 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:30.898259 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:30.898340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:30.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:31.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.398695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:31.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:31.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:31.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:31.898777 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.398810 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.398883 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.399201 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:32.899041 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:32.899121 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:32.899453 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.398148 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.398223 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.398492 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:33.898278 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:33.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:33.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:33.898787 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:34.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.398528 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.398873 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:34.898221 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:34.898312 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:34.898605 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.398303 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.398382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.398734 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:35.898472 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:35.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:35.898882 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:35.898940 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:36.398373 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.398454 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.398749 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:36.898854 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:36.898926 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:36.899222 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.398175 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.398272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.398626 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:37.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:37.898296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:37.898642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:38.398279 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.398350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:38.398766 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:38.898476 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:38.898554 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:38.898890 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.398379 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.398485 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.398800 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:39.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:39.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:39.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:40.398507 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.398604 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.398907 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:40.398953 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:40.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:40.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:40.898635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.398325 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.398402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.398863 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:41.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:41.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:41.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.398319 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.398385 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.398670 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:42.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:42.898377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:42.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:42.898763 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:43.398283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:43.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:43.898461 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:43.898733 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.398250 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:44.898255 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:44.898335 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:44.898712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:45.398244 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.398321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.398663 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:45.398717 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:45.898312 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:45.898398 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:45.898773 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.398512 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.398593 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.398928 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:46.898755 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:46.898837 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:46.899103 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:47.399074 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.399155 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.399470 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:47.399520 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:47.898232 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:47.898309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:47.898683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.398547 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:48.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:48.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:48.898703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.398425 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.398500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.398876 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:49.898573 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:49.898645 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:49.899024 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:49.899073 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:50.398808 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.398884 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.399215 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:50.898894 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:50.898974 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:50.899314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.399073 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.399145 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.399405 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:51.899204 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:51.899286 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:51.899637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:51.899692 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:52.398394 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.398470 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:52.898245 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:52.898334 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:52.898628 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.398736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:53.898467 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:53.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:53.898914 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:54.398587 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.398670 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.398930 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:54.398971 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:54.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:54.898362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:54.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.398429 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.398501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.398821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:55.898239 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:55.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:55.898643 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.398282 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.398367 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.398726 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:56.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:56.898668 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:56.899021 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:56.899088 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:57.398828 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.398910 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.399188 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:57.898996 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:57.899073 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:57.899382 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.399133 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.399235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.399594 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:58.898219 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:58.898452 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:58.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:35:59.398261 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:35:59.398752 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:35:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:35:59.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:35:59.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.399357 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.399435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.399772 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:00.898475 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:00.898558 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:00.898912 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:01.398629 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.398704 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.399062 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:01.399123 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:01.898881 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:01.898960 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:01.899233 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.399234 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.399313 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.399704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:02.898296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:02.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:02.898715 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.398641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:03.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:03.898359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:03.898695 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:03.898751 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:04.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.398413 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.398743 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:04.898440 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:04.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:04.898790 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.398493 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.398570 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.398895 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:05.898635 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:05.898712 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:05.899049 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:05.899102 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:06.398845 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.398927 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.399275 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:06.899212 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:06.899287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:06.899619 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.398388 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.398739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:07.898423 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:07.898501 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:07.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:08.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:08.398759 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:08.898430 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:08.898507 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:08.898855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.398214 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.398290 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.398601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:09.898266 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:09.898350 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:09.898705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.398295 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.398377 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:10.898349 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:10.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:10.898702 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:10.898757 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:11.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.398705 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:11.898435 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:11.898509 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:11.898839 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.398738 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.398804 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.399069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:12.898825 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:12.898900 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:12.899217 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:12.899278 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:13.399064 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.399138 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.399479 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:13.898174 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:13.898254 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:13.898539 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.398371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:14.898437 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:14.898518 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:14.898877 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:15.398539 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.398617 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.398894 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:15.398947 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:15.898294 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:15.898402 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:15.898784 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.398330 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.398731 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:16.898535 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:16.898609 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:16.898886 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:17.398882 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.398955 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.399291 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:17.399351 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:17.899139 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:17.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:17.899551 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.398620 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:18.898277 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:18.898354 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:18.898649 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.398247 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.398346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:19.898387 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:19.898473 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:19.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:19.898804 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:20.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:20.898334 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:20.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:20.898762 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.398456 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.398795 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:21.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:21.898383 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:21.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:22.398748 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.398819 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.399287 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:22.399332 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:22.899045 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:22.899124 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:22.899438 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.398179 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.398688 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:23.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:23.898382 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:23.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.398222 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.398629 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:24.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:24.898394 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:24.898747 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:24.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:25.398296 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.398380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.398720 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:25.898403 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:25.898472 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:25.898736 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:26.898649 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:26.898727 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:26.899069 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:26.899125 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:27.398556 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.398654 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.398964 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:27.898756 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:27.898845 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:27.899194 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.398978 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.399057 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.399387 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:28.899171 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:28.899242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:28.899511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:28.899553 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:29.398265 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.398698 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:29.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:29.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:29.898723 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.398656 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:30.898379 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:30.898467 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:30.898858 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:31.398431 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.398844 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:31.398900 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:31.898545 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:31.898622 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:31.898916 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.398834 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.398911 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.399252 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:32.899021 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:32.899098 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:32.899424 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.398133 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.398202 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.398473 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:33.898147 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:33.898235 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:33.898584 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:33.898642 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:34.398163 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.398242 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.398591 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:34.898191 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:34.898275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:34.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.398363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.398707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:35.898320 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:35.898407 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:35.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:35.898810 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:36.398446 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.398521 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:36.898729 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:36.898812 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:36.899129 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.399112 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.399185 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.399511 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:37.898225 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:37.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:37.898568 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:38.398267 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.398343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.398710 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:38.398764 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:38.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:38.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:38.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.398240 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.398351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.398667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:39.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:39.898369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:39.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:40.398360 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.398435 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.398766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:40.398819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:40.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:40.898314 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:40.898637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.398298 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.398683 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:41.898412 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:41.898487 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:41.898821 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.398242 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.398580 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:42.898280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:42.898355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:42.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:42.898748 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:43.398416 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.398491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.398846 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:43.898235 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:43.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:43.898615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.398722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:44.898411 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:44.898483 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:44.898775 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:44.898824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:45.398248 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:45.898365 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:45.898459 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:45.898837 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.398716 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:46.898502 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:46.898576 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:46.898840 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:46.898879 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:47.398781 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.398852 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.399176 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:47.898950 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:47.899024 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:47.899371 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.399121 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.399194 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.399456 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:48.899245 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:48.899322 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:48.899641 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:48.899693 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:49.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.398370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.398748 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:49.898250 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:49.898327 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:49.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.398271 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.398347 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.398703 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:50.898421 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:50.898500 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:50.898849 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:51.398536 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.398624 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.398900 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:51.398944 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:51.898273 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:51.898353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:51.898689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.398314 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.398399 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.398737 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:52.898253 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:52.898349 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:52.898676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.398352 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:53.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:53.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:53.898717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:53.898780 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:54.398263 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.398358 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:54.898290 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:54.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:54.898745 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.398460 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.398541 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.398872 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:55.898241 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:55.898317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:55.898573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:56.398264 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.398341 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.398665 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:56.398721 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:56.898737 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:56.898816 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:56.899137 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.399000 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.399068 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.399335 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:57.899058 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:57.899134 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:57.899469 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:58.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.398317 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.398690 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:36:58.398749 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:36:58.898385 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:58.898460 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:58.898722 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:36:59.898268 1633651 type.go:168] "Request Body" body=""
	I1216 06:36:59.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:36:59.898667 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:00.398403 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.398481 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.398778 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:00.398824 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:00.898298 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:00.898373 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:00.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.398432 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.398511 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.398865 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:01.898261 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:01.898333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:01.898600 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:02.398363 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.398458 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.398848 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:02.398903 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:02.898598 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:02.898677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:02.899033 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.398801 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.398882 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.399146 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:03.898939 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:03.899014 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:03.899351 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:04.399028 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.399109 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.399429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:04.399479 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:04.898171 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:04.898241 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:04.898523 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.398299 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.398375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:05.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:05.898372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:05.898673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.398612 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:06.898577 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:06.898653 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:06.899006 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:06.899062 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:07.398886 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.398973 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.399304 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:07.899089 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:07.899159 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:07.899439 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.399244 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.399316 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.399642 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:08.898339 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:08.898425 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:08.898755 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:09.398430 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.398498 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.398754 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:09.398796 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:09.898279 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:09.898378 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:09.898704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.398393 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.398469 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.398815 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:10.898372 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:10.898442 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:10.898709 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:11.398311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.398776 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:11.398848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:11.898377 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:11.898455 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:11.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.398324 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.398587 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:12.898265 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:12.898339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:12.898691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.398375 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.398449 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.398799 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:13.898228 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:13.898308 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:13.898581 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:13.898622 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:14.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.398340 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:14.898262 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:14.898344 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:14.898707 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.398332 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.398408 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.398675 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:15.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:15.898368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:15.898652 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:15.898699 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:16.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.398365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.398727 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:16.898702 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:16.898784 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:16.899056 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.398983 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.399055 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.399412 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:17.899241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:17.899319 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:17.899615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:17.899667 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:18.398328 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.398395 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.398676 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:18.898311 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:18.898389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:18.898756 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.398447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.398552 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.398855 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:19.898524 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:19.898598 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:19.898881 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:20.398260 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.398339 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.398672 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:20.398727 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:20.898288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:20.898361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:20.898685 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.398238 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.398309 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.398582 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:21.898316 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:21.898391 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:21.898740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:22.398286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.398717 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:22.398773 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:22.898431 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:22.898499 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:22.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.398284 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.398729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:23.898447 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:23.898524 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:23.898868 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:24.398560 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.398637 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.398927 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:24.398969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:24.898283 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:24.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:24.898696 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.398285 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.398721 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:25.898222 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:25.898307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:25.898627 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.398280 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.398362 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:26.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:26.898800 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:26.899142 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:26.899196 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:27.398976 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.399052 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.399314 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:27.899092 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:27.899164 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:27.899471 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.398223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.398299 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.398602 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:28.898256 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:28.898325 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:28.898655 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:29.398287 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.398693 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:29.398750 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:29.898408 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:29.898505 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:29.898906 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.398302 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.398631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:30.898286 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:30.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:30.898730 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:31.398439 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.398517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:31.398911 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:31.898555 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:31.898623 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:31.898889 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.398937 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.399013 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.399352 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:32.899143 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:32.899220 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:32.899571 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.398155 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.398227 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.398484 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:33.898182 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:33.898255 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:33.898595 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:33.898651 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:34.398324 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.398738 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:34.898420 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:34.898491 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:34.898769 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.398292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.398369 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.398658 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:35.898356 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:35.898432 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:35.898728 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:35.898819 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:36.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.398549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.398814 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:36.898859 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:36.898933 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:36.899273 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.399136 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.399213 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.399567 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:37.898258 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:37.898329 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:37.898588 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:38.398300 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.398379 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:38.398713 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:38.898281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:38.898356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:38.898708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.398215 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.398283 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.398608 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:39.898292 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:39.898365 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:39.898735 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:40.398290 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.398419 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.398713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:40.398761 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:40.898223 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:40.898291 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:40.898631 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.398327 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:41.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:41.898393 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:41.898757 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:42.398724 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.398796 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.399059 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:42.399111 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:42.898855 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:42.898936 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:42.899284 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.399100 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.399176 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.399519 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:43.898212 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:43.898287 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:43.898548 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.398253 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.398333 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.398697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:44.898401 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:44.898475 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:44.898804 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:44.898860 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:45.398241 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.398315 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.398573 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:45.898329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:45.898404 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:45.898750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.398359 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.398673 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:46.898698 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:46.898768 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:46.899039 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:46.899080 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:47.398977 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.399049 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.399400 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:47.899044 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:47.899122 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:47.899468 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.398202 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.398275 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.398540 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:48.898231 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:48.898304 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:48.898650 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:49.398232 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.398318 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.398653 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:49.398711 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:49.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:49.898415 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:49.898682 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.398255 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.398337 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.398634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:50.898338 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:50.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:50.898764 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:51.398436 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.398506 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.398820 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:51.398875 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:51.898249 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:51.898343 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:51.898647 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.398786 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:52.898247 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:52.898320 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:52.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.398288 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.398360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.398691 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:53.898307 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:53.898414 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:53.898758 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:53.898813 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:54.398461 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.398534 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.398794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:54.898301 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:54.898376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:54.898766 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.398305 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:55.898252 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:55.898321 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:55.898601 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:56.398276 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.398353 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:56.398769 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:56.898725 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:56.898806 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:56.899207 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.398957 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.399027 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.399310 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:57.899115 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:57.899188 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:57.899518 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.398225 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.398296 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.398611 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:58.898289 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:58.898363 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:58.898624 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:37:58.898670 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:37:59.398281 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.398361 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.398712 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:37:59.898427 1633651 type.go:168] "Request Body" body=""
	I1216 06:37:59.898517 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:37:59.898807 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:00.398278 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.398364 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.399475 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 06:38:00.898197 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:00.898269 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:00.898604 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:01.398343 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.398423 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.398732 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:01.398781 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:01.898309 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:01.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:01.898662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.398348 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.398666 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:02.898354 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:02.898429 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:02.898739 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.398239 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.398307 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.398615 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:03.898236 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:03.898311 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:03.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:03.898700 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:04.398254 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.398336 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.398687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:04.898364 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:04.898443 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:04.898706 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.398258 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.398679 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:05.898384 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:05.898464 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:05.898794 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:05.898848 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:06.398478 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.398546 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.398819 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:06.898821 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:06.898898 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:06.899244 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.399095 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.399177 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.399526 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:07.898233 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:07.898305 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:07.898583 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:08.398273 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.398355 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.398689 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:08.398747 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:08.898439 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:08.898512 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:08.898861 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.398318 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.398389 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.398662 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:09.898282 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:09.898371 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:09.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.398372 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.398704 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:10.898271 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:10.898351 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:10.898646 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:10.898697 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:11.398274 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.398699 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:11.898267 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:11.898346 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:11.898692 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.398257 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.398345 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.398686 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:12.898310 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:12.898387 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:12.898713 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:12.898765 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:13.398455 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.398532 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.398909 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:13.898601 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:13.898682 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:13.899003 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.398366 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.398694 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:14.898453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:14.898549 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:14.898911 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:14.898969 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:15.398256 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.398338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.398607 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:15.898340 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:15.898416 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:15.898765 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.398312 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.398390 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.398677 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:16.898563 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:16.898635 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:16.898893 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:17.398825 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.398897 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.399203 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:17.399251 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:17.899015 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:17.899092 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:17.899429 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.399192 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.399272 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.399543 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:18.898305 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:18.898380 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:18.898701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.398329 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.398405 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.398708 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:19.898230 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:19.898303 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:19.898634 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:19.898691 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:20.398293 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.398368 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.398701 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:20.898295 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:20.898370 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:20.898697 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.398453 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.398559 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.398856 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:21.898276 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:21.898357 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:21.898729 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:21.898782 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:22.398291 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.398376 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.398740 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:22.898287 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:22.898360 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:22.898617 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.398307 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.398396 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.398750 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:23.898299 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:23.898375 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:23.898725 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:24.398289 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.398356 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.398635 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:24.398676 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:24.898264 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:24.898338 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:24.898687 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.398443 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.398523 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.398874 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:25.898588 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:25.898660 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:25.898920 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:26.398605 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.398677 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.399010 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 06:38:26.399063 1633651 node_ready.go:55] error getting node "functional-364120" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-364120": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 06:38:26.898789 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:26.898863 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:26.899190 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.400218 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.400306 1633651 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-364120" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 06:38:27.400637 1633651 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 06:38:27.898246 1633651 type.go:168] "Request Body" body=""
	I1216 06:38:27.898312 1633651 node_ready.go:38] duration metric: took 6m0.000267561s for node "functional-364120" to be "Ready" ...
	I1216 06:38:27.901509 1633651 out.go:203] 
	W1216 06:38:27.904340 1633651 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:38:27.904359 1633651 out.go:285] * 
	W1216 06:38:27.906499 1633651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:38:27.909191 1633651 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:38:36 functional-364120 crio[5357]: time="2025-12-16T06:38:36.548206424Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=37ae7426-45c2-45bc-a7f9-b14a371314ac name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605426434Z" level=info msg="Checking image status: minikube-local-cache-test:functional-364120" id=13eb0b8e-5049-44e4-87c5-72abd7d1dca5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605630563Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605685439Z" level=info msg="Image minikube-local-cache-test:functional-364120 not found" id=13eb0b8e-5049-44e4-87c5-72abd7d1dca5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.605776106Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-364120 found" id=13eb0b8e-5049-44e4-87c5-72abd7d1dca5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.629361141Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-364120" id=b996cd7c-bf1b-4d21-aa33-2c27e8f7fc09 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.629523825Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-364120 not found" id=b996cd7c-bf1b-4d21-aa33-2c27e8f7fc09 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.629576436Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-364120 found" id=b996cd7c-bf1b-4d21-aa33-2c27e8f7fc09 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.653340032Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-364120" id=767a7891-029a-4860-8349-88781764a026 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.653499417Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-364120 not found" id=767a7891-029a-4860-8349-88781764a026 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:37 functional-364120 crio[5357]: time="2025-12-16T06:38:37.653554843Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-364120 found" id=767a7891-029a-4860-8349-88781764a026 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.620908719Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1045cfc3-a374-4471-9bcc-7fb60eb5cce5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.970770396Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9246ae91-cfa3-4179-8016-7029975f27bd name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.97091891Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9246ae91-cfa3-4179-8016-7029975f27bd name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:38 functional-364120 crio[5357]: time="2025-12-16T06:38:38.970958188Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9246ae91-cfa3-4179-8016-7029975f27bd name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.542340366Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=15620261-c883-4179-8c3d-551c5846372d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.542645549Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=15620261-c883-4179-8c3d-551c5846372d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.542771909Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=15620261-c883-4179-8c3d-551c5846372d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.594827473Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=66d90bf1-74af-4ebc-8ecb-c345e0cabdf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.594957254Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=66d90bf1-74af-4ebc-8ecb-c345e0cabdf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.594997041Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=66d90bf1-74af-4ebc-8ecb-c345e0cabdf9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.621225571Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8c8c2628-faae-47f6-82e4-f68829c2ead6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.621384145Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=8c8c2628-faae-47f6-82e4-f68829c2ead6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:39 functional-364120 crio[5357]: time="2025-12-16T06:38:39.621433507Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=8c8c2628-faae-47f6-82e4-f68829c2ead6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:38:40 functional-364120 crio[5357]: time="2025-12-16T06:38:40.217809837Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4602658b-593b-43b3-a28f-0dcd69a07939 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:38:44.171818    9475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:44.172333    9475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:44.174319    9475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:44.174946    9475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:38:44.176726    9475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:38:44 up  9:21,  0 user,  load average: 0.45, 0.33, 0.79
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:38:41 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:42 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 16 06:38:42 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:42 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:42 functional-364120 kubelet[9352]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:42 functional-364120 kubelet[9352]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:42 functional-364120 kubelet[9352]: E1216 06:38:42.461706    9352 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:42 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:42 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:43 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 16 06:38:43 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:43 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:43 functional-364120 kubelet[9384]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:43 functional-364120 kubelet[9384]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:43 functional-364120 kubelet[9384]: E1216 06:38:43.214882    9384 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:43 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:43 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:38:43 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 16 06:38:43 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:43 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:38:43 functional-364120 kubelet[9422]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:43 functional-364120 kubelet[9422]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:38:43 functional-364120 kubelet[9422]: E1216 06:38:43.958476    9422 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:38:43 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:38:43 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (453.533719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.52s)
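The wait loop in the log above (node_ready.go) polls GET /api/v1/nodes/functional-364120 roughly every 500ms, retrying through each "connection refused", until the 6m0s readiness deadline expires. Below is a minimal client-go sketch of that kind of wait; the node name, poll interval, and timeout are taken from the log, while the package layout and error handling are illustrative and not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the apiserver until the node reports Ready or the
	// timeout expires, mirroring the 500ms/6m loop visible in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient errors (e.g. connection refused) are retried,
					// just like the warnings in the log above.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "functional-364120"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

Run against this profile's kubeconfig, the sketch would time out the same way, since nothing is listening on 192.168.49.2:8441 while the kubelet is crash-looping.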

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-364120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 06:41:06.821363 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:43:08.325457 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:44:31.396605 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:46:06.820705 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:48:08.325992 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-364120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.02678711s)

                                                
                                                
-- stdout --
	* [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001159637s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
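The root cause in the kubeadm output above is the kubelet's cgroup validation: this worker still runs a cgroup v1 (legacy) hierarchy, and kubelet v1.35 exits at startup on such hosts unless the configuration option named in the warning ('FailCgroupV1') is explicitly set to 'false'. Below is a minimal Go sketch of the usual unified-hierarchy check; the /sys/fs/cgroup/cgroup.controllers heuristic is an assumption about one common detection method, not the kubelet's actual validation code.

	package main

	import (
		"fmt"
		"os"
	)

	// cgroupV2 reports whether the host exposes the cgroup v2 unified hierarchy.
	// The presence of /sys/fs/cgroup/cgroup.controllers is a common heuristic
	// for this; the kubelet's real validation is more involved.
	func cgroupV2() bool {
		_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
		return err == nil
	}

	func main() {
		if cgroupV2() {
			fmt.Println("cgroup v2 (unified hierarchy): kubelet v1.35 starts normally")
			return
		}
		// On a cgroup v1 host, kubelet v1.35 refuses to start unless the
		// KubeletConfiguration option named in the warning above
		// ('FailCgroupV1') is explicitly set to 'false'.
		fmt.Println("cgroup v1 (legacy hierarchy): kubelet v1.35 exits at startup by default")
	}

On this host the sketch would report cgroup v1, which is consistent with the kubelet restart loop (restart counters 1155–1157) captured earlier in this report.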
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-364120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.028108852s for "functional-364120" cluster.
I1216 06:50:58.354496 1599255 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
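The inspect output above lists the only ports this node publishes, all bound to 127.0.0.1; the apiserver port 8441/tcp maps to host port 34263, which is how host-side tooling reaches the cluster. A short sketch using the Docker Engine Go SDK to read that binding programmatically follows; the container name and port are taken from the output above, everything else is illustrative.

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		ctx := context.Background()
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		// Same data as `docker inspect functional-364120`, but typed.
		info, err := cli.ContainerInspect(ctx, "functional-364120")
		if err != nil {
			panic(err)
		}
		// 8441/tcp is the apiserver port from the inspect output above; the
		// binding is published on 127.0.0.1 only (host port 34263 in this run).
		for _, b := range info.NetworkSettings.Ports[nat.Port("8441/tcp")] {
			fmt.Printf("apiserver reachable on %s:%s\n", b.HostIP, b.HostPort)
		}
	}

From the command line, `docker port functional-364120 8441` reports the same mapping.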
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (297.19519ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-487532 image ls --format json --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format table --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete         │ -p functional-487532                                                                                                                              │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:24 UTC │
	│ start          │ -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:24 UTC │                     │
	│ start          │ -p functional-364120 --alsologtostderr -v=8                                                                                                       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:32 UTC │                     │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:latest                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add minikube-local-cache-test:functional-364120                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache delete minikube-local-cache-test:functional-364120                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl images                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ cache          │ functional-364120 cache reload                                                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ kubectl        │ functional-364120 kubectl -- --context functional-364120 get pods                                                                                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ start          │ -p functional-364120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:38:45
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:38:45.382114 1639474 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:38:45.382275 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382279 1639474 out.go:374] Setting ErrFile to fd 2...
	I1216 06:38:45.382283 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382644 1639474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:38:45.383081 1639474 out.go:368] Setting JSON to false
	I1216 06:38:45.383946 1639474 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33677,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:38:45.384032 1639474 start.go:143] virtualization:  
	I1216 06:38:45.387610 1639474 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:38:45.391422 1639474 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:38:45.391485 1639474 notify.go:221] Checking for updates...
	I1216 06:38:45.397275 1639474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:38:45.400538 1639474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:38:45.403348 1639474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:38:45.406183 1639474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:38:45.410019 1639474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:38:45.413394 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:45.413485 1639474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:38:45.451796 1639474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:38:45.451901 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.529304 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.519310041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.529400 1639474 docker.go:319] overlay module found
	I1216 06:38:45.532456 1639474 out.go:179] * Using the docker driver based on existing profile
	I1216 06:38:45.535342 1639474 start.go:309] selected driver: docker
	I1216 06:38:45.535352 1639474 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.535432 1639474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:38:45.535555 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.605792 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.594564391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.606168 1639474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:38:45.606189 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:45.606237 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:45.606285 1639474 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.611347 1639474 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:38:45.614388 1639474 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:38:45.617318 1639474 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:38:45.620204 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:45.620247 1639474 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:38:45.620256 1639474 cache.go:65] Caching tarball of preloaded images
	I1216 06:38:45.620287 1639474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:38:45.620351 1639474 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:38:45.620360 1639474 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:38:45.620487 1639474 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:38:45.639567 1639474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:38:45.639578 1639474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:38:45.639591 1639474 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:38:45.639630 1639474 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:38:45.639687 1639474 start.go:364] duration metric: took 37.908µs to acquireMachinesLock for "functional-364120"
	I1216 06:38:45.639706 1639474 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:38:45.639711 1639474 fix.go:54] fixHost starting: 
	I1216 06:38:45.639996 1639474 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:38:45.656952 1639474 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:38:45.656970 1639474 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:38:45.660116 1639474 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:38:45.660138 1639474 machine.go:94] provisionDockerMachine start ...
	I1216 06:38:45.660218 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.677387 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.677705 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.677711 1639474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:38:45.812247 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.812262 1639474 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:38:45.812325 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.830038 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.830333 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.830342 1639474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:38:45.969440 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.969519 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.987438 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.987738 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.987751 1639474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:38:46.120750 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:38:46.120766 1639474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:38:46.120795 1639474 ubuntu.go:190] setting up certificates
	I1216 06:38:46.120811 1639474 provision.go:84] configureAuth start
	I1216 06:38:46.120880 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:46.139450 1639474 provision.go:143] copyHostCerts
	I1216 06:38:46.139518 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:38:46.139535 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:38:46.139611 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:38:46.139701 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:38:46.139705 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:38:46.139730 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:38:46.139777 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:38:46.139780 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:38:46.139802 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:38:46.139846 1639474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:38:46.453267 1639474 provision.go:177] copyRemoteCerts
	I1216 06:38:46.453323 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:38:46.453367 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.472384 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:46.568304 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:38:46.585458 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:38:46.602822 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:38:46.619947 1639474 provision.go:87] duration metric: took 499.122604ms to configureAuth
	I1216 06:38:46.619964 1639474 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:38:46.620160 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:46.620252 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.637350 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:46.637660 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:46.637671 1639474 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:38:46.957629 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:38:46.957641 1639474 machine.go:97] duration metric: took 1.297496853s to provisionDockerMachine
	I1216 06:38:46.957652 1639474 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:38:46.957670 1639474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:38:46.957741 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:38:46.957790 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.978202 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.080335 1639474 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:38:47.083578 1639474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:38:47.083597 1639474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:38:47.083607 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:38:47.083662 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:38:47.083735 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:38:47.083808 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:38:47.083855 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:38:47.091346 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:47.108874 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:38:47.126774 1639474 start.go:296] duration metric: took 169.103296ms for postStartSetup
	I1216 06:38:47.126870 1639474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:38:47.126918 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.145224 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.237421 1639474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:38:47.242526 1639474 fix.go:56] duration metric: took 1.602809118s for fixHost
	I1216 06:38:47.242542 1639474 start.go:83] releasing machines lock for "functional-364120", held for 1.602847814s
	I1216 06:38:47.242635 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:47.260121 1639474 ssh_runner.go:195] Run: cat /version.json
	I1216 06:38:47.260167 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.260174 1639474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:38:47.260224 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.277503 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.283903 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.464356 1639474 ssh_runner.go:195] Run: systemctl --version
	I1216 06:38:47.476410 1639474 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:38:47.514461 1639474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:38:47.518820 1639474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:38:47.518882 1639474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:38:47.526809 1639474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:38:47.526823 1639474 start.go:496] detecting cgroup driver to use...
	I1216 06:38:47.526855 1639474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:38:47.526909 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:38:47.542915 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:38:47.556456 1639474 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:38:47.556532 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:38:47.572387 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:38:47.585623 1639474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:38:47.693830 1639474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:38:47.836192 1639474 docker.go:234] disabling docker service ...
	I1216 06:38:47.836253 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:38:47.851681 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:38:47.865315 1639474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:38:47.985223 1639474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:38:48.104393 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:38:48.118661 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:38:48.136892 1639474 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:38:48.136961 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.147508 1639474 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:38:48.147579 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.156495 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.165780 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.174392 1639474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:38:48.182433 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.191004 1639474 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.198914 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.207365 1639474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:38:48.214548 1639474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:38:48.221727 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.346771 1639474 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:38:48.562751 1639474 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:38:48.562822 1639474 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:38:48.566564 1639474 start.go:564] Will wait 60s for crictl version
	I1216 06:38:48.566626 1639474 ssh_runner.go:195] Run: which crictl
	I1216 06:38:48.570268 1639474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:38:48.600286 1639474 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:38:48.600360 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.630102 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.662511 1639474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:38:48.665401 1639474 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:38:48.681394 1639474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:38:48.688428 1639474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 06:38:48.691264 1639474 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:38:48.691424 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:48.691501 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.730823 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.730835 1639474 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:38:48.730892 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.756054 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.756075 1639474 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:38:48.756081 1639474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:38:48.756185 1639474 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:38:48.756284 1639474 ssh_runner.go:195] Run: crio config
	I1216 06:38:48.821920 1639474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 06:38:48.821940 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:48.821953 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:48.821961 1639474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:38:48.821989 1639474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:38:48.822118 1639474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:38:48.822186 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:38:48.830098 1639474 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:38:48.830166 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:38:48.837393 1639474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:38:48.849769 1639474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:38:48.862224 1639474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1216 06:38:48.875020 1639474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:38:48.878641 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.988462 1639474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:38:49.398022 1639474 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:38:49.398033 1639474 certs.go:195] generating shared ca certs ...
	I1216 06:38:49.398047 1639474 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:38:49.398216 1639474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:38:49.398259 1639474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:38:49.398266 1639474 certs.go:257] generating profile certs ...
	I1216 06:38:49.398355 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:38:49.398397 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:38:49.398442 1639474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:38:49.398557 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:38:49.398591 1639474 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:38:49.398598 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:38:49.398627 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:38:49.398648 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:38:49.398673 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:38:49.398722 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:49.399378 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:38:49.420435 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:38:49.440537 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:38:49.460786 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:38:49.480628 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:38:49.497487 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:38:49.514939 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:38:49.532313 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:38:49.550215 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:38:49.580225 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:38:49.597583 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:38:49.615627 1639474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:38:49.629067 1639474 ssh_runner.go:195] Run: openssl version
	I1216 06:38:49.635264 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.642707 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:38:49.650527 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654313 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654369 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.695142 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:38:49.702542 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.709833 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:38:49.717202 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720835 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720891 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.762100 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:38:49.769702 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.777475 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:38:49.785134 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789017 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789075 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.830097 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:38:49.837887 1639474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:38:49.841718 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:38:49.883003 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:38:49.923792 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:38:49.964873 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:38:50.009367 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:38:50.051701 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 06:38:50.093263 1639474 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:50.093349 1639474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:38:50.093423 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.120923 1639474 cri.go:89] found id: ""
	I1216 06:38:50.120988 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:38:50.128935 1639474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:38:50.128944 1639474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:38:50.129001 1639474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:38:50.136677 1639474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.137223 1639474 kubeconfig.go:125] found "functional-364120" server: "https://192.168.49.2:8441"
	I1216 06:38:50.138591 1639474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:38:50.148403 1639474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 06:24:13.753381452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 06:38:48.871691407 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1216 06:38:50.148423 1639474 kubeadm.go:1161] stopping kube-system containers ...
	I1216 06:38:50.148434 1639474 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 06:38:50.148512 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.182168 1639474 cri.go:89] found id: ""
	I1216 06:38:50.182231 1639474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 06:38:50.201521 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:38:50.209281 1639474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 06:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 06:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 06:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 06:28 /etc/kubernetes/scheduler.conf
	
	I1216 06:38:50.209338 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:38:50.217195 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:38:50.224648 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.224702 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:38:50.231990 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.239836 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.239894 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.247352 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:38:50.254862 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.254916 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:38:50.262178 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:38:50.270092 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:50.316982 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.327287 1639474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010279379s)
	I1216 06:38:51.327357 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.524152 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.584718 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.627519 1639474 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:38:51.627603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.127996 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.128739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.628621 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.128741 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.627831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.128517 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.628413 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.627801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.128288 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.628401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.128329 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.627998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.127831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.628547 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.628540 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.128146 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.128721 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.628766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.628489 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.627784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.128544 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.128535 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.627955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.127765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.627817 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.128692 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.628069 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.627921 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.128708 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.627689 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.127821 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.627890 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.127687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.628412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.128182 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.627796 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.128611 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.127795 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.628147 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.127806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.627762 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.127677 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.628043 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.127752 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.627697 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.128437 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.627779 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.128353 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.628739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.128542 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.628449 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.628679 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.128464 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.628609 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.127698 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.628073 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.128615 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.627743 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.127794 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.628605 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.627806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.628042 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.128637 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.627742 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.128694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.627803 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.127790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.628497 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.127786 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.627780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.627974 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.128440 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.628685 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.128622 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.628715 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.128328 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.628129 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.127678 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.628187 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.128724 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.627765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.127823 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.627834 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.128417 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.628784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.128501 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.628458 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.128381 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.128387 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.627769 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.128638 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.627687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.628346 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.128443 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.628500 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.128632 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.628608 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.128412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.628099 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.128601 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.127801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.628098 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:51.127749 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:51.627803 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:51.627880 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:51.662321 1639474 cri.go:89] found id: ""
	I1216 06:39:51.662334 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.662341 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:51.662347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:51.662418 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:51.693006 1639474 cri.go:89] found id: ""
	I1216 06:39:51.693020 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.693027 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:51.693032 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:51.693091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:51.719156 1639474 cri.go:89] found id: ""
	I1216 06:39:51.719169 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.719176 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:51.719181 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:51.719237 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:51.745402 1639474 cri.go:89] found id: ""
	I1216 06:39:51.745416 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.745423 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:51.745429 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:51.745492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:51.771770 1639474 cri.go:89] found id: ""
	I1216 06:39:51.771784 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.771791 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:51.771796 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:51.771854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:51.797172 1639474 cri.go:89] found id: ""
	I1216 06:39:51.797186 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.797192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:51.797198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:51.797257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:51.825478 1639474 cri.go:89] found id: ""
	I1216 06:39:51.825492 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.825499 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:51.825506 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:51.825516 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:51.897574 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:51.897593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:51.925635 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:51.925652 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:51.993455 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:51.993477 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:52.027866 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:52.027883 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:52.096535 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:54.597178 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:54.607445 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:54.607507 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:54.634705 1639474 cri.go:89] found id: ""
	I1216 06:39:54.634719 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.634733 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:54.634739 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:54.634800 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:54.668209 1639474 cri.go:89] found id: ""
	I1216 06:39:54.668223 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.668230 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:54.668235 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:54.668293 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:54.703300 1639474 cri.go:89] found id: ""
	I1216 06:39:54.703314 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.703321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:54.703326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:54.703385 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:54.732154 1639474 cri.go:89] found id: ""
	I1216 06:39:54.732168 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.732175 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:54.732180 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:54.732241 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:54.758222 1639474 cri.go:89] found id: ""
	I1216 06:39:54.758237 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.758244 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:54.758249 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:54.758309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:54.783433 1639474 cri.go:89] found id: ""
	I1216 06:39:54.783456 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.783463 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:54.783474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:54.783544 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:54.811264 1639474 cri.go:89] found id: ""
	I1216 06:39:54.811277 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.811284 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:54.811291 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:54.811302 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:54.876784 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:54.876805 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:54.891733 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:54.891749 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:54.963951 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:54.963962 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:54.963975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:55.036358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:55.036380 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:57.569339 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:57.579596 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:57.579659 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:57.604959 1639474 cri.go:89] found id: ""
	I1216 06:39:57.604973 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.604980 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:57.604985 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:57.605045 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:57.630710 1639474 cri.go:89] found id: ""
	I1216 06:39:57.630725 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.630731 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:57.630736 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:57.630794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:57.662734 1639474 cri.go:89] found id: ""
	I1216 06:39:57.662748 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.662756 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:57.662773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:57.662838 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:57.699847 1639474 cri.go:89] found id: ""
	I1216 06:39:57.699868 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.699875 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:57.699880 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:57.699941 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:57.726549 1639474 cri.go:89] found id: ""
	I1216 06:39:57.726563 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.726570 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:57.726575 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:57.726639 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:57.752583 1639474 cri.go:89] found id: ""
	I1216 06:39:57.752597 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.752604 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:57.752609 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:57.752667 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:57.780752 1639474 cri.go:89] found id: ""
	I1216 06:39:57.780767 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.780774 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:57.780782 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:57.780793 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:57.846931 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:57.846952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:57.862606 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:57.862623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:57.928743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:57.928764 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:57.928775 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:57.997232 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:57.997254 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:00.537687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:00.558059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:00.558144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:00.594907 1639474 cri.go:89] found id: ""
	I1216 06:40:00.594929 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.594939 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:00.594953 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:00.595036 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:00.628243 1639474 cri.go:89] found id: ""
	I1216 06:40:00.628272 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.628280 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:00.628294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:00.628377 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:00.667757 1639474 cri.go:89] found id: ""
	I1216 06:40:00.667773 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.667791 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:00.667797 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:00.667873 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:00.707304 1639474 cri.go:89] found id: ""
	I1216 06:40:00.707319 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.707327 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:00.707333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:00.707413 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:00.742620 1639474 cri.go:89] found id: ""
	I1216 06:40:00.742636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.742644 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:00.742650 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:00.742727 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:00.772404 1639474 cri.go:89] found id: ""
	I1216 06:40:00.772421 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.772429 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:00.772435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:00.772526 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:00.800238 1639474 cri.go:89] found id: ""
	I1216 06:40:00.800253 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.800260 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:00.800268 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:00.800280 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:00.866967 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:00.866989 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:00.883111 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:00.883127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:00.951359 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:00.951371 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:00.951382 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:01.020844 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:01.020870 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:03.552704 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:03.563452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:03.563545 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:03.588572 1639474 cri.go:89] found id: ""
	I1216 06:40:03.588585 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.588592 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:03.588598 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:03.588665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:03.617698 1639474 cri.go:89] found id: ""
	I1216 06:40:03.617712 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.617719 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:03.617724 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:03.617784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:03.643270 1639474 cri.go:89] found id: ""
	I1216 06:40:03.643285 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.643291 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:03.643296 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:03.643356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:03.679135 1639474 cri.go:89] found id: ""
	I1216 06:40:03.679148 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.679155 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:03.679160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:03.679217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:03.707978 1639474 cri.go:89] found id: ""
	I1216 06:40:03.707991 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.707998 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:03.708003 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:03.708071 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:03.741796 1639474 cri.go:89] found id: ""
	I1216 06:40:03.741821 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.741827 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:03.741832 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:03.741899 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:03.767959 1639474 cri.go:89] found id: ""
	I1216 06:40:03.767983 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.767991 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:03.767998 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:03.768009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:03.833601 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:03.833622 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:03.848136 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:03.848154 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:03.911646 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:03.911661 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:03.911672 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:03.980874 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:03.980894 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.512671 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:06.522859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:06.522944 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:06.552384 1639474 cri.go:89] found id: ""
	I1216 06:40:06.552399 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.552406 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:06.552411 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:06.552492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:06.577262 1639474 cri.go:89] found id: ""
	I1216 06:40:06.577276 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.577293 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:06.577299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:06.577357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:06.603757 1639474 cri.go:89] found id: ""
	I1216 06:40:06.603772 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.603779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:06.603784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:06.603850 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:06.629717 1639474 cri.go:89] found id: ""
	I1216 06:40:06.629732 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.629751 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:06.629756 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:06.629846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:06.665006 1639474 cri.go:89] found id: ""
	I1216 06:40:06.665031 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.665039 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:06.665044 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:06.665109 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:06.698777 1639474 cri.go:89] found id: ""
	I1216 06:40:06.698791 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.698807 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:06.698813 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:06.698879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:06.727424 1639474 cri.go:89] found id: ""
	I1216 06:40:06.727448 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.727455 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:06.727464 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:06.727475 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.758535 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:06.758552 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:06.827915 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:06.827944 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:06.843925 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:06.843949 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:06.913118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:06.913128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:06.913140 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.481120 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:09.491592 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:09.491658 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:09.518336 1639474 cri.go:89] found id: ""
	I1216 06:40:09.518351 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.518358 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:09.518363 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:09.518423 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:09.547930 1639474 cri.go:89] found id: ""
	I1216 06:40:09.547943 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.547950 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:09.547955 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:09.548012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:09.574921 1639474 cri.go:89] found id: ""
	I1216 06:40:09.574935 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.574942 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:09.574947 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:09.575008 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:09.600427 1639474 cri.go:89] found id: ""
	I1216 06:40:09.600495 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.600502 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:09.600508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:09.600567 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:09.628992 1639474 cri.go:89] found id: ""
	I1216 06:40:09.629006 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.629015 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:09.629019 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:09.629080 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:09.667383 1639474 cri.go:89] found id: ""
	I1216 06:40:09.667397 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.667404 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:09.667409 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:09.667468 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:09.710169 1639474 cri.go:89] found id: ""
	I1216 06:40:09.710183 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.710190 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:09.710197 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:09.710208 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:09.776054 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:09.776075 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:09.790720 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:09.790736 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:09.855182 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:09.855192 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:09.855204 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.922382 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:09.922402 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.451670 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:12.461890 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:12.461962 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:12.486630 1639474 cri.go:89] found id: ""
	I1216 06:40:12.486644 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.486650 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:12.486657 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:12.486719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:12.514531 1639474 cri.go:89] found id: ""
	I1216 06:40:12.514545 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.514551 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:12.514558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:12.514621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:12.541612 1639474 cri.go:89] found id: ""
	I1216 06:40:12.541627 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.541633 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:12.541638 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:12.541703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:12.567638 1639474 cri.go:89] found id: ""
	I1216 06:40:12.567652 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.567659 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:12.567664 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:12.567723 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:12.593074 1639474 cri.go:89] found id: ""
	I1216 06:40:12.593089 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.593096 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:12.593101 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:12.593164 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:12.621872 1639474 cri.go:89] found id: ""
	I1216 06:40:12.621886 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.621893 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:12.621898 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:12.621954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:12.658898 1639474 cri.go:89] found id: ""
	I1216 06:40:12.658912 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.658919 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:12.658927 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:12.658939 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:12.736529 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:12.736540 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:12.736551 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:12.804860 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:12.804881 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.834018 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:12.834036 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:12.903542 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:12.903564 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:15.418582 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:15.428941 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:15.429002 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:15.458081 1639474 cri.go:89] found id: ""
	I1216 06:40:15.458096 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.458103 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:15.458109 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:15.458172 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:15.487644 1639474 cri.go:89] found id: ""
	I1216 06:40:15.487658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.487665 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:15.487670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:15.487729 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:15.512552 1639474 cri.go:89] found id: ""
	I1216 06:40:15.512565 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.512572 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:15.512577 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:15.512646 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:15.537944 1639474 cri.go:89] found id: ""
	I1216 06:40:15.537958 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.537965 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:15.537971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:15.538030 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:15.574197 1639474 cri.go:89] found id: ""
	I1216 06:40:15.574211 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.574218 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:15.574223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:15.574289 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:15.603183 1639474 cri.go:89] found id: ""
	I1216 06:40:15.603197 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.603204 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:15.603209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:15.603272 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:15.628682 1639474 cri.go:89] found id: ""
	I1216 06:40:15.628696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.628703 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:15.628710 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:15.628720 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:15.716665 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:15.716676 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:15.716687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:15.787785 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:15.787806 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:15.815751 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:15.815772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:15.885879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:15.885902 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.402627 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:18.413143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:18.413213 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:18.439934 1639474 cri.go:89] found id: ""
	I1216 06:40:18.439948 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.439956 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:18.439961 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:18.440023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:18.467477 1639474 cri.go:89] found id: ""
	I1216 06:40:18.467491 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.467498 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:18.467503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:18.467564 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:18.492982 1639474 cri.go:89] found id: ""
	I1216 06:40:18.493002 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.493009 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:18.493013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:18.493073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:18.519158 1639474 cri.go:89] found id: ""
	I1216 06:40:18.519173 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.519180 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:18.519185 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:18.519250 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:18.544672 1639474 cri.go:89] found id: ""
	I1216 06:40:18.544687 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.544694 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:18.544699 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:18.544760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:18.574100 1639474 cri.go:89] found id: ""
	I1216 06:40:18.574115 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.574122 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:18.574127 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:18.574190 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:18.600048 1639474 cri.go:89] found id: ""
	I1216 06:40:18.600062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.600069 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:18.600077 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:18.600087 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:18.670680 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:18.670700 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.686391 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:18.686408 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:18.756196 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:18.756206 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:18.756218 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:18.824602 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:18.824623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.356152 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:21.366658 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:21.366719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:21.391945 1639474 cri.go:89] found id: ""
	I1216 06:40:21.391959 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.391966 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:21.391971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:21.392032 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:21.419561 1639474 cri.go:89] found id: ""
	I1216 06:40:21.419581 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.419588 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:21.419593 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:21.419662 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:21.446105 1639474 cri.go:89] found id: ""
	I1216 06:40:21.446119 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.446135 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:21.446143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:21.446212 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:21.472095 1639474 cri.go:89] found id: ""
	I1216 06:40:21.472110 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.472117 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:21.472123 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:21.472188 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:21.502751 1639474 cri.go:89] found id: ""
	I1216 06:40:21.502766 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.502773 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:21.502778 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:21.502841 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:21.528514 1639474 cri.go:89] found id: ""
	I1216 06:40:21.528538 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.528546 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:21.528551 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:21.528623 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:21.554279 1639474 cri.go:89] found id: ""
	I1216 06:40:21.554293 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.554300 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:21.554308 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:21.554319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:21.622775 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:21.622786 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:21.622795 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:21.692973 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:21.692993 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.722066 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:21.722083 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:21.789953 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:21.789974 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
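	The cycle above repeats roughly every three seconds: minikube looks for a kube-apiserver process, asks CRI-O for each control-plane container, finds none, and re-gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs while kubectl keeps failing to reach localhost:8441. A rough way to re-run the same checks by hand on the node (a sketch only; the pgrep pattern, the crictl invocation and port 8441 are taken from the log above, while the final curl probe of /healthz is an assumed extra step, not something the test itself runs):

	    # look for a running apiserver process, as the test's poll loop does
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # ask CRI-O whether a kube-apiserver container exists in any state
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # assumed extra check: probe the apiserver port kubectl cannot reach
	    curl -k https://localhost:8441/healthz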
	I1216 06:40:24.305740 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:24.315908 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:24.315976 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:24.344080 1639474 cri.go:89] found id: ""
	I1216 06:40:24.344095 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.344102 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:24.344108 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:24.344169 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:24.370623 1639474 cri.go:89] found id: ""
	I1216 06:40:24.370638 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.370645 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:24.370649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:24.370714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:24.397678 1639474 cri.go:89] found id: ""
	I1216 06:40:24.397701 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.397709 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:24.397714 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:24.397787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:24.427585 1639474 cri.go:89] found id: ""
	I1216 06:40:24.427599 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.427607 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:24.427612 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:24.427685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:24.457451 1639474 cri.go:89] found id: ""
	I1216 06:40:24.457465 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.457472 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:24.457489 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:24.457562 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:24.483717 1639474 cri.go:89] found id: ""
	I1216 06:40:24.483731 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.483738 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:24.483743 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:24.483817 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:24.509734 1639474 cri.go:89] found id: ""
	I1216 06:40:24.509748 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.509756 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:24.509763 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:24.509774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:24.575490 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:24.575510 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:24.590459 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:24.590476 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:24.660840 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:24.660854 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:24.660865 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:24.742683 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:24.742706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:27.272978 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:27.283654 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:27.283721 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:27.310045 1639474 cri.go:89] found id: ""
	I1216 06:40:27.310060 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.310067 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:27.310072 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:27.310132 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:27.339319 1639474 cri.go:89] found id: ""
	I1216 06:40:27.339334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.339342 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:27.339347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:27.339409 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:27.366885 1639474 cri.go:89] found id: ""
	I1216 06:40:27.366901 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.366910 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:27.366915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:27.366980 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:27.392968 1639474 cri.go:89] found id: ""
	I1216 06:40:27.392982 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.392989 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:27.392994 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:27.393072 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:27.425432 1639474 cri.go:89] found id: ""
	I1216 06:40:27.425446 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.425466 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:27.425471 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:27.425538 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:27.454875 1639474 cri.go:89] found id: ""
	I1216 06:40:27.454899 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.454906 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:27.454912 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:27.454982 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:27.480348 1639474 cri.go:89] found id: ""
	I1216 06:40:27.480363 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.480370 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:27.480378 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:27.480389 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:27.550687 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:27.550715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:27.566692 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:27.566711 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:27.634204 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:27.634214 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:27.634227 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:27.706020 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:27.706040 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:30.238169 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:30.248488 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:30.248550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:30.274527 1639474 cri.go:89] found id: ""
	I1216 06:40:30.274542 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.274549 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:30.274554 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:30.274615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:30.300592 1639474 cri.go:89] found id: ""
	I1216 06:40:30.300610 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.300617 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:30.300624 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:30.300693 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:30.327939 1639474 cri.go:89] found id: ""
	I1216 06:40:30.327966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.327973 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:30.327978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:30.328040 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:30.358884 1639474 cri.go:89] found id: ""
	I1216 06:40:30.358898 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.358905 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:30.358910 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:30.358968 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:30.387991 1639474 cri.go:89] found id: ""
	I1216 06:40:30.388005 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.388012 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:30.388017 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:30.388090 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:30.413034 1639474 cri.go:89] found id: ""
	I1216 06:40:30.413048 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.413055 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:30.413059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:30.413118 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:30.449975 1639474 cri.go:89] found id: ""
	I1216 06:40:30.450018 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.450034 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:30.450041 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:30.450053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:30.466503 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:30.466521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:30.528819 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:30.528828 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:30.528839 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:30.597696 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:30.597715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:30.625300 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:30.625317 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.194250 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:33.204305 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:33.204368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:33.229739 1639474 cri.go:89] found id: ""
	I1216 06:40:33.229753 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.229760 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:33.229765 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:33.229821 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:33.254131 1639474 cri.go:89] found id: ""
	I1216 06:40:33.254144 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.254151 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:33.254156 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:33.254214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:33.279859 1639474 cri.go:89] found id: ""
	I1216 06:40:33.279881 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.279889 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:33.279894 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:33.279956 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:33.305951 1639474 cri.go:89] found id: ""
	I1216 06:40:33.305966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.305973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:33.305978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:33.306037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:33.335767 1639474 cri.go:89] found id: ""
	I1216 06:40:33.335781 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.335789 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:33.335793 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:33.335859 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:33.362761 1639474 cri.go:89] found id: ""
	I1216 06:40:33.362774 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.362781 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:33.362786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:33.362843 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:33.389319 1639474 cri.go:89] found id: ""
	I1216 06:40:33.389334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.389340 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:33.389348 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:33.389359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:33.453913 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:33.453925 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:33.453936 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:33.522875 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:33.522895 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:33.556966 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:33.556981 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.624329 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:33.624350 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:36.139596 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:36.150559 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:36.150621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:36.176931 1639474 cri.go:89] found id: ""
	I1216 06:40:36.176946 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.176954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:36.176959 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:36.177023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:36.203410 1639474 cri.go:89] found id: ""
	I1216 06:40:36.203424 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.203430 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:36.203435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:36.203498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:36.232378 1639474 cri.go:89] found id: ""
	I1216 06:40:36.232393 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.232399 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:36.232407 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:36.232504 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:36.258614 1639474 cri.go:89] found id: ""
	I1216 06:40:36.258636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.258644 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:36.258649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:36.258711 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:36.287134 1639474 cri.go:89] found id: ""
	I1216 06:40:36.287149 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.287156 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:36.287161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:36.287225 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:36.316901 1639474 cri.go:89] found id: ""
	I1216 06:40:36.316915 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.316922 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:36.316927 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:36.316991 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:36.343964 1639474 cri.go:89] found id: ""
	I1216 06:40:36.343979 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.343988 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:36.343997 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:36.344009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:36.409151 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:36.409161 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:36.409172 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:36.477694 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:36.477717 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:36.507334 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:36.507355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:36.577747 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:36.577766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.094282 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:39.105025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:39.105089 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:39.131493 1639474 cri.go:89] found id: ""
	I1216 06:40:39.131507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.131514 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:39.131525 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:39.131586 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:39.163796 1639474 cri.go:89] found id: ""
	I1216 06:40:39.163811 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.163819 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:39.163823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:39.163886 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:39.191137 1639474 cri.go:89] found id: ""
	I1216 06:40:39.191152 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.191160 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:39.191165 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:39.191226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:39.217834 1639474 cri.go:89] found id: ""
	I1216 06:40:39.217850 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.217857 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:39.217862 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:39.217926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:39.244937 1639474 cri.go:89] found id: ""
	I1216 06:40:39.244951 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.244958 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:39.244963 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:39.245026 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:39.274684 1639474 cri.go:89] found id: ""
	I1216 06:40:39.274698 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.274706 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:39.274711 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:39.274774 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:39.302124 1639474 cri.go:89] found id: ""
	I1216 06:40:39.302138 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.302145 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:39.302153 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:39.302163 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:39.370146 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:39.370166 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:39.397930 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:39.397946 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:39.469905 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:39.469925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.487153 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:39.487169 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:39.556831 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.057113 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:42.068649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:42.068719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:42.098202 1639474 cri.go:89] found id: ""
	I1216 06:40:42.098217 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.098224 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:42.098229 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:42.098294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:42.130680 1639474 cri.go:89] found id: ""
	I1216 06:40:42.130696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.130703 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:42.130708 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:42.130779 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:42.167131 1639474 cri.go:89] found id: ""
	I1216 06:40:42.167146 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.167153 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:42.167160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:42.167230 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:42.197324 1639474 cri.go:89] found id: ""
	I1216 06:40:42.197339 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.197346 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:42.197352 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:42.197420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:42.225831 1639474 cri.go:89] found id: ""
	I1216 06:40:42.225848 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.225856 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:42.225861 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:42.225930 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:42.257762 1639474 cri.go:89] found id: ""
	I1216 06:40:42.257777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.257786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:42.257792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:42.257852 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:42.284492 1639474 cri.go:89] found id: ""
	I1216 06:40:42.284507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.284515 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:42.284523 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:42.284535 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:42.351298 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:42.351319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:42.367176 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:42.367193 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:42.433375 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.433386 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:42.433396 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:42.500708 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:42.500729 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.031368 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:45.055503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:45.055570 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:45.098074 1639474 cri.go:89] found id: ""
	I1216 06:40:45.098091 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.098100 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:45.098105 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:45.098174 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:45.144642 1639474 cri.go:89] found id: ""
	I1216 06:40:45.144658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.144666 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:45.144671 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:45.144743 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:45.177748 1639474 cri.go:89] found id: ""
	I1216 06:40:45.177777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.177786 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:45.177792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:45.177875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:45.237332 1639474 cri.go:89] found id: ""
	I1216 06:40:45.237350 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.237368 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:45.237373 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:45.237462 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:45.277580 1639474 cri.go:89] found id: ""
	I1216 06:40:45.277608 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.277625 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:45.277631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:45.277787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:45.319169 1639474 cri.go:89] found id: ""
	I1216 06:40:45.319184 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.319192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:45.319198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:45.319268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:45.355649 1639474 cri.go:89] found id: ""
	I1216 06:40:45.355663 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.355672 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:45.355691 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:45.355723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:45.423762 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:45.423783 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.451985 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:45.452002 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:45.516593 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:45.516613 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:45.531478 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:45.531500 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:45.596800 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:48.098483 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:48.108786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:48.108849 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:48.134211 1639474 cri.go:89] found id: ""
	I1216 06:40:48.134225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.134232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:48.134237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:48.134297 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:48.160517 1639474 cri.go:89] found id: ""
	I1216 06:40:48.160531 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.160538 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:48.160544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:48.160604 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:48.185669 1639474 cri.go:89] found id: ""
	I1216 06:40:48.185682 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.185690 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:48.185694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:48.185754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:48.210265 1639474 cri.go:89] found id: ""
	I1216 06:40:48.210279 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.210286 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:48.210291 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:48.210403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:48.234252 1639474 cri.go:89] found id: ""
	I1216 06:40:48.234267 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.234274 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:48.234279 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:48.234339 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:48.259358 1639474 cri.go:89] found id: ""
	I1216 06:40:48.259372 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.259379 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:48.259384 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:48.259443 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:48.288697 1639474 cri.go:89] found id: ""
	I1216 06:40:48.288713 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.288720 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:48.288728 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:48.288738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:48.357686 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:48.357712 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:48.372954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:48.372973 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:48.434679 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:48.434689 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:48.434701 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:48.505103 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:48.505127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:51.033411 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:51.043540 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:51.043600 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:51.070010 1639474 cri.go:89] found id: ""
	I1216 06:40:51.070025 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.070032 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:51.070037 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:51.070100 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:51.096267 1639474 cri.go:89] found id: ""
	I1216 06:40:51.096282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.096290 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:51.096295 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:51.096356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:51.122692 1639474 cri.go:89] found id: ""
	I1216 06:40:51.122707 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.122714 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:51.122719 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:51.122784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:51.152647 1639474 cri.go:89] found id: ""
	I1216 06:40:51.152662 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.152670 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:51.152680 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:51.152744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:51.180574 1639474 cri.go:89] found id: ""
	I1216 06:40:51.180589 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.180597 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:51.180602 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:51.180668 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:51.206605 1639474 cri.go:89] found id: ""
	I1216 06:40:51.206619 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.206626 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:51.206631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:51.206695 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:51.231786 1639474 cri.go:89] found id: ""
	I1216 06:40:51.231809 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.231817 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:51.231825 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:51.231835 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:51.297100 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:51.297120 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:51.311954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:51.311972 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:51.379683 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:51.379694 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:51.379706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:51.447537 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:51.447557 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:53.983520 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:53.993929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:53.993987 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:54.023619 1639474 cri.go:89] found id: ""
	I1216 06:40:54.023634 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.023640 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:54.023645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:54.023708 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:54.049842 1639474 cri.go:89] found id: ""
	I1216 06:40:54.049857 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.049864 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:54.049869 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:54.049934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:54.077181 1639474 cri.go:89] found id: ""
	I1216 06:40:54.077205 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.077212 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:54.077217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:54.077280 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:54.105267 1639474 cri.go:89] found id: ""
	I1216 06:40:54.105282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.105291 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:54.105297 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:54.105363 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:54.130851 1639474 cri.go:89] found id: ""
	I1216 06:40:54.130874 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.130881 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:54.130886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:54.130949 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:54.156895 1639474 cri.go:89] found id: ""
	I1216 06:40:54.156910 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.156917 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:54.156923 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:54.156983 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:54.183545 1639474 cri.go:89] found id: ""
	I1216 06:40:54.183560 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.183566 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:54.183574 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:54.183584 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:54.249489 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:54.249509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:54.263930 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:54.263947 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:54.329743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:54.329755 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:54.329766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:54.396582 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:54.396603 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:56.928591 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:56.939856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:56.939917 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:56.967210 1639474 cri.go:89] found id: ""
	I1216 06:40:56.967225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.967232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:56.967237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:56.967298 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:56.993815 1639474 cri.go:89] found id: ""
	I1216 06:40:56.993829 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.993836 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:56.993841 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:56.993898 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:57.029670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.029684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.029691 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:57.029696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:57.029754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:57.054833 1639474 cri.go:89] found id: ""
	I1216 06:40:57.054847 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.054854 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:57.054859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:57.054924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:57.079670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.079684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.079691 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:57.079696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:57.079761 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:57.104048 1639474 cri.go:89] found id: ""
	I1216 06:40:57.104062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.104069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:57.104074 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:57.104142 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:57.129442 1639474 cri.go:89] found id: ""
	I1216 06:40:57.129462 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.129469 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:57.129477 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:57.129487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:57.197165 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:57.197185 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:57.226479 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:57.226498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:57.292031 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:57.292053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:57.306889 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:57.306905 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:57.372214 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:59.872521 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:59.882455 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:59.882521 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:59.913998 1639474 cri.go:89] found id: ""
	I1216 06:40:59.914012 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.914020 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:59.914025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:59.914091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:59.942569 1639474 cri.go:89] found id: ""
	I1216 06:40:59.942583 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.942589 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:59.942594 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:59.942665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:59.970700 1639474 cri.go:89] found id: ""
	I1216 06:40:59.970729 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.970736 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:59.970742 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:59.970809 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:59.997067 1639474 cri.go:89] found id: ""
	I1216 06:40:59.997085 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.997092 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:59.997098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:59.997163 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:00.191988 1639474 cri.go:89] found id: ""
	I1216 06:41:00.192005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.192013 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:00.192018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:00.192086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:00.277353 1639474 cri.go:89] found id: ""
	I1216 06:41:00.277369 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.277377 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:00.277382 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:00.277497 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:00.317655 1639474 cri.go:89] found id: ""
	I1216 06:41:00.317680 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.317688 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:00.317697 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:00.317710 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:00.373222 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:00.373244 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:00.450289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:00.450312 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:00.467305 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:00.467321 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:00.537520 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:00.537529 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:00.537544 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.105837 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:03.116211 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:03.116271 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:03.140992 1639474 cri.go:89] found id: ""
	I1216 06:41:03.141005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.141013 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:03.141018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:03.141077 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:03.169832 1639474 cri.go:89] found id: ""
	I1216 06:41:03.169846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.169853 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:03.169858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:03.169923 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:03.200294 1639474 cri.go:89] found id: ""
	I1216 06:41:03.200308 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.200316 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:03.200321 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:03.200422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:03.226615 1639474 cri.go:89] found id: ""
	I1216 06:41:03.226629 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.226635 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:03.226641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:03.226702 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:03.252099 1639474 cri.go:89] found id: ""
	I1216 06:41:03.252113 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.252120 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:03.252125 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:03.252186 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:03.277049 1639474 cri.go:89] found id: ""
	I1216 06:41:03.277064 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.277070 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:03.277075 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:03.277136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:03.302834 1639474 cri.go:89] found id: ""
	I1216 06:41:03.302850 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.302857 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:03.302865 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:03.302877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:03.369696 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:03.369719 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:03.384336 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:03.384358 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:03.450962 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:03.450973 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:03.450985 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.522274 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:03.522297 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:06.053196 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:06.063351 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:06.063422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:06.089075 1639474 cri.go:89] found id: ""
	I1216 06:41:06.089089 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.089096 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:06.089102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:06.089162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:06.118245 1639474 cri.go:89] found id: ""
	I1216 06:41:06.118259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.118266 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:06.118271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:06.118336 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:06.143697 1639474 cri.go:89] found id: ""
	I1216 06:41:06.143724 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.143732 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:06.143737 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:06.143805 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:06.169572 1639474 cri.go:89] found id: ""
	I1216 06:41:06.169586 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.169594 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:06.169599 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:06.169661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:06.195851 1639474 cri.go:89] found id: ""
	I1216 06:41:06.195867 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.195874 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:06.195879 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:06.195942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:06.223692 1639474 cri.go:89] found id: ""
	I1216 06:41:06.223707 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.223715 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:06.223720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:06.223780 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:06.249649 1639474 cri.go:89] found id: ""
	I1216 06:41:06.249679 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.249686 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:06.249694 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:06.249705 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:06.314738 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:06.314759 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:06.329678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:06.329695 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:06.395023 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:06.395034 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:06.395046 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:06.463667 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:06.463687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:08.992603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:09.003856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:09.003937 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:09.031578 1639474 cri.go:89] found id: ""
	I1216 06:41:09.031592 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.031599 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:09.031604 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:09.031663 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:09.056946 1639474 cri.go:89] found id: ""
	I1216 06:41:09.056961 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.056969 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:09.056974 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:09.057035 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:09.082038 1639474 cri.go:89] found id: ""
	I1216 06:41:09.082053 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.082060 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:09.082065 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:09.082125 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:09.107847 1639474 cri.go:89] found id: ""
	I1216 06:41:09.107862 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.107869 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:09.107874 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:09.107933 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:09.133995 1639474 cri.go:89] found id: ""
	I1216 06:41:09.134010 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.134017 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:09.134022 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:09.134086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:09.159110 1639474 cri.go:89] found id: ""
	I1216 06:41:09.159125 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.159132 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:09.159137 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:09.159197 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:09.189150 1639474 cri.go:89] found id: ""
	I1216 06:41:09.189164 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.189171 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:09.189179 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:09.189190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:09.251080 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:09.251090 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:09.251102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:09.318859 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:09.318879 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:09.349358 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:09.349381 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:09.418362 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:09.418385 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:11.933431 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:11.944248 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:11.944309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:11.976909 1639474 cri.go:89] found id: ""
	I1216 06:41:11.976924 1639474 logs.go:282] 0 containers: []
	W1216 06:41:11.976932 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:11.976937 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:11.976998 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:12.011035 1639474 cri.go:89] found id: ""
	I1216 06:41:12.011050 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.011057 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:12.011062 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:12.011126 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:12.041473 1639474 cri.go:89] found id: ""
	I1216 06:41:12.041495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.041502 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:12.041508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:12.041571 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:12.066438 1639474 cri.go:89] found id: ""
	I1216 06:41:12.066463 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.066471 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:12.066477 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:12.066542 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:12.090884 1639474 cri.go:89] found id: ""
	I1216 06:41:12.090899 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.090906 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:12.090911 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:12.090970 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:12.116491 1639474 cri.go:89] found id: ""
	I1216 06:41:12.116506 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.116516 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:12.116522 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:12.116580 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:12.142941 1639474 cri.go:89] found id: ""
	I1216 06:41:12.142956 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.142963 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:12.142971 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:12.142982 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:12.172125 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:12.172142 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:12.240713 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:12.240734 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:12.255672 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:12.255689 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:12.321167 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:12.321177 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:12.321190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:14.894286 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:14.904324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:14.904383 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:14.938397 1639474 cri.go:89] found id: ""
	I1216 06:41:14.938421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.938429 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:14.938434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:14.938501 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:14.967116 1639474 cri.go:89] found id: ""
	I1216 06:41:14.967130 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.967137 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:14.967141 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:14.967203 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:14.993300 1639474 cri.go:89] found id: ""
	I1216 06:41:14.993324 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.993331 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:14.993336 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:14.993414 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:15.065324 1639474 cri.go:89] found id: ""
	I1216 06:41:15.065347 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.065374 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:15.065379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:15.065453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:15.094230 1639474 cri.go:89] found id: ""
	I1216 06:41:15.094254 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.094262 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:15.094268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:15.094334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:15.125543 1639474 cri.go:89] found id: ""
	I1216 06:41:15.125557 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.125567 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:15.125574 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:15.125641 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:15.153256 1639474 cri.go:89] found id: ""
	I1216 06:41:15.153271 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.153280 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:15.153287 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:15.153298 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:15.220613 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:15.220633 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:15.235620 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:15.235637 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:15.298217 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:15.298227 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:15.298238 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:15.366620 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:15.366643 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:17.896595 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:17.908386 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:17.908446 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:17.937743 1639474 cri.go:89] found id: ""
	I1216 06:41:17.937757 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.937763 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:17.937768 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:17.937827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:17.970369 1639474 cri.go:89] found id: ""
	I1216 06:41:17.970383 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.970390 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:17.970395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:17.970453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:17.996832 1639474 cri.go:89] found id: ""
	I1216 06:41:17.996846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.996853 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:17.996858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:17.996924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:18.038145 1639474 cri.go:89] found id: ""
	I1216 06:41:18.038159 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.038167 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:18.038172 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:18.038235 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:18.064225 1639474 cri.go:89] found id: ""
	I1216 06:41:18.064239 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.064248 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:18.064254 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:18.064314 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:18.094775 1639474 cri.go:89] found id: ""
	I1216 06:41:18.094789 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.094797 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:18.094802 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:18.094863 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:18.120874 1639474 cri.go:89] found id: ""
	I1216 06:41:18.120888 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.120895 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:18.120903 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:18.120913 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:18.188407 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:18.188429 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:18.221279 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:18.221295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:18.288107 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:18.288129 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:18.303324 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:18.303342 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:18.371049 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:20.871320 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:20.881458 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:20.881519 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:20.910690 1639474 cri.go:89] found id: ""
	I1216 06:41:20.910704 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.910711 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:20.910716 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:20.910778 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:20.940115 1639474 cri.go:89] found id: ""
	I1216 06:41:20.940131 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.940138 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:20.940144 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:20.940205 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:20.971890 1639474 cri.go:89] found id: ""
	I1216 06:41:20.971904 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.971911 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:20.971916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:20.971973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:20.997611 1639474 cri.go:89] found id: ""
	I1216 06:41:20.997627 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.997634 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:20.997639 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:20.997714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:21.028905 1639474 cri.go:89] found id: ""
	I1216 06:41:21.028919 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.028926 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:21.028931 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:21.028990 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:21.055176 1639474 cri.go:89] found id: ""
	I1216 06:41:21.055190 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.055197 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:21.055202 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:21.055262 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:21.081697 1639474 cri.go:89] found id: ""
	I1216 06:41:21.081712 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.081719 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:21.081727 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:21.081738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:21.148234 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:21.148255 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:21.164172 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:21.164192 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:21.228352 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:21.228362 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:21.228374 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:21.295358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:21.295378 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:23.826021 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:23.836732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:23.836794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:23.865987 1639474 cri.go:89] found id: ""
	I1216 06:41:23.866001 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.866008 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:23.866013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:23.866073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:23.891393 1639474 cri.go:89] found id: ""
	I1216 06:41:23.891408 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.891415 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:23.891420 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:23.891486 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:23.918388 1639474 cri.go:89] found id: ""
	I1216 06:41:23.918403 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.918410 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:23.918415 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:23.918475 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:23.961374 1639474 cri.go:89] found id: ""
	I1216 06:41:23.961390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.961397 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:23.961402 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:23.961461 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:23.987162 1639474 cri.go:89] found id: ""
	I1216 06:41:23.987176 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.987184 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:23.987195 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:23.987257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:24.016111 1639474 cri.go:89] found id: ""
	I1216 06:41:24.016127 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.016134 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:24.016139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:24.016202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:24.043481 1639474 cri.go:89] found id: ""
	I1216 06:41:24.043495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.043503 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:24.043511 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:24.043521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:24.111316 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:24.111326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:24.111338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:24.178630 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:24.178650 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:24.213388 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:24.213405 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:24.283269 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:24.283290 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:26.798616 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:26.808720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:26.808786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:26.834419 1639474 cri.go:89] found id: ""
	I1216 06:41:26.834433 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.834451 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:26.834457 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:26.834530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:26.860230 1639474 cri.go:89] found id: ""
	I1216 06:41:26.860244 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.860251 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:26.860256 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:26.860316 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:26.886841 1639474 cri.go:89] found id: ""
	I1216 06:41:26.886856 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.886863 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:26.886868 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:26.886934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:26.933097 1639474 cri.go:89] found id: ""
	I1216 06:41:26.933121 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.933129 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:26.933134 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:26.933201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:26.967219 1639474 cri.go:89] found id: ""
	I1216 06:41:26.967233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.967241 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:26.967258 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:26.967319 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:27.008045 1639474 cri.go:89] found id: ""
	I1216 06:41:27.008074 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.008082 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:27.008088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:27.008156 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:27.034453 1639474 cri.go:89] found id: ""
	I1216 06:41:27.034469 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.034476 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:27.034484 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:27.034507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:27.104223 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:27.104245 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:27.119468 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:27.119487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:27.188973 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:27.188983 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:27.188994 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:27.258008 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:27.258028 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:29.786955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:29.797122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:29.797184 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:29.824207 1639474 cri.go:89] found id: ""
	I1216 06:41:29.824221 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.824228 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:29.824233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:29.824290 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:29.850615 1639474 cri.go:89] found id: ""
	I1216 06:41:29.850630 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.850636 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:29.850641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:29.850703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:29.876387 1639474 cri.go:89] found id: ""
	I1216 06:41:29.876401 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.876408 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:29.876413 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:29.876498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:29.907653 1639474 cri.go:89] found id: ""
	I1216 06:41:29.907667 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.907674 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:29.907678 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:29.907735 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:29.944219 1639474 cri.go:89] found id: ""
	I1216 06:41:29.944233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.944239 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:29.944244 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:29.944302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:29.976007 1639474 cri.go:89] found id: ""
	I1216 06:41:29.976021 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.976029 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:29.976033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:29.976095 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:30.024272 1639474 cri.go:89] found id: ""
	I1216 06:41:30.024289 1639474 logs.go:282] 0 containers: []
	W1216 06:41:30.024297 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:30.024306 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:30.024322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:30.119806 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:30.119827 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:30.136379 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:30.136400 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:30.205690 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:30.205700 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:30.205723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:30.274216 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:30.274240 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:32.809139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:32.819371 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:32.819431 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:32.847039 1639474 cri.go:89] found id: ""
	I1216 06:41:32.847054 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.847065 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:32.847070 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:32.847138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:32.875215 1639474 cri.go:89] found id: ""
	I1216 06:41:32.875229 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.875236 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:32.875240 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:32.875300 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:32.907300 1639474 cri.go:89] found id: ""
	I1216 06:41:32.907314 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.907321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:32.907326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:32.907381 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:32.938860 1639474 cri.go:89] found id: ""
	I1216 06:41:32.938874 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.938881 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:32.938886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:32.938942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:32.971352 1639474 cri.go:89] found id: ""
	I1216 06:41:32.971366 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.971374 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:32.971379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:32.971436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:33.012516 1639474 cri.go:89] found id: ""
	I1216 06:41:33.012531 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.012538 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:33.012543 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:33.012622 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:33.041830 1639474 cri.go:89] found id: ""
	I1216 06:41:33.041844 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.041851 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:33.041859 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:33.041869 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:33.107636 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:33.107656 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:33.122787 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:33.122803 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:33.191649 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:33.191659 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:33.191682 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:33.263447 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:33.263474 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:35.794998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:35.805176 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:35.805236 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:35.831135 1639474 cri.go:89] found id: ""
	I1216 06:41:35.831149 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.831156 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:35.831161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:35.831223 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:35.860254 1639474 cri.go:89] found id: ""
	I1216 06:41:35.860281 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.860289 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:35.860294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:35.860360 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:35.887306 1639474 cri.go:89] found id: ""
	I1216 06:41:35.887320 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.887327 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:35.887333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:35.887391 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:35.917653 1639474 cri.go:89] found id: ""
	I1216 06:41:35.917668 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.917690 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:35.917696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:35.917763 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:35.959523 1639474 cri.go:89] found id: ""
	I1216 06:41:35.959546 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.959553 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:35.959558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:35.959629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:35.989044 1639474 cri.go:89] found id: ""
	I1216 06:41:35.989062 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.989069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:35.989077 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:35.989138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:36.024859 1639474 cri.go:89] found id: ""
	I1216 06:41:36.024875 1639474 logs.go:282] 0 containers: []
	W1216 06:41:36.024885 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:36.024895 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:36.024912 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:36.056878 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:36.056896 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:36.121811 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:36.121834 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:36.137437 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:36.137455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:36.205908 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:36.205920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:36.205931 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:38.776930 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:38.786842 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:38.786902 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:38.812622 1639474 cri.go:89] found id: ""
	I1216 06:41:38.812637 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.812644 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:38.812649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:38.812705 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:38.838434 1639474 cri.go:89] found id: ""
	I1216 06:41:38.838448 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.838456 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:38.838461 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:38.838523 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:38.863392 1639474 cri.go:89] found id: ""
	I1216 06:41:38.863407 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.863414 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:38.863419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:38.863479 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:38.888908 1639474 cri.go:89] found id: ""
	I1216 06:41:38.888922 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.888929 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:38.888934 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:38.888993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:38.917217 1639474 cri.go:89] found id: ""
	I1216 06:41:38.917247 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.917255 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:38.917260 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:38.917340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:38.951610 1639474 cri.go:89] found id: ""
	I1216 06:41:38.951623 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.951630 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:38.951645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:38.951706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:38.982144 1639474 cri.go:89] found id: ""
	I1216 06:41:38.982158 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.982165 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:38.982173 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:38.982184 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:39.051829 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:39.051839 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:39.051860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:39.125701 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:39.125723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:39.157087 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:39.157104 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:39.225477 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:39.225498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:41.740919 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:41.751149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:41.751211 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:41.776245 1639474 cri.go:89] found id: ""
	I1216 06:41:41.776259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.776266 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:41.776271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:41.776330 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:41.801530 1639474 cri.go:89] found id: ""
	I1216 06:41:41.801543 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.801556 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:41.801561 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:41.801619 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:41.826287 1639474 cri.go:89] found id: ""
	I1216 06:41:41.826300 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.826307 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:41.826312 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:41.826368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:41.855404 1639474 cri.go:89] found id: ""
	I1216 06:41:41.855419 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.855426 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:41.855431 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:41.855490 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:41.883079 1639474 cri.go:89] found id: ""
	I1216 06:41:41.883093 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.883100 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:41.883104 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:41.883162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:41.924362 1639474 cri.go:89] found id: ""
	I1216 06:41:41.924376 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.924393 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:41.924399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:41.924503 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:41.958054 1639474 cri.go:89] found id: ""
	I1216 06:41:41.958069 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.958076 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:41.958083 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:41.958093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:42.031093 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:42.031104 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:42.031117 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:42.098938 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:42.098961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:42.132662 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:42.132681 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:42.206635 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:42.206658 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:44.725533 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:44.735690 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:44.735751 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:44.764539 1639474 cri.go:89] found id: ""
	I1216 06:41:44.764554 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.764561 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:44.764566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:44.764624 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:44.789462 1639474 cri.go:89] found id: ""
	I1216 06:41:44.789476 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.789483 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:44.789487 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:44.789550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:44.813863 1639474 cri.go:89] found id: ""
	I1216 06:41:44.813877 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.813884 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:44.813889 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:44.813948 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:44.842990 1639474 cri.go:89] found id: ""
	I1216 06:41:44.843006 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.843013 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:44.843018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:44.843076 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:44.868986 1639474 cri.go:89] found id: ""
	I1216 06:41:44.869000 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.869006 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:44.869013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:44.869070 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:44.897735 1639474 cri.go:89] found id: ""
	I1216 06:41:44.897759 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.897767 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:44.897773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:44.897840 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:44.927690 1639474 cri.go:89] found id: ""
	I1216 06:41:44.927715 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.927722 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:44.927730 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:44.927740 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:45.002166 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:45.002190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:45.029027 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:45.029047 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:45.167411 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:45.167428 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:45.167448 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:45.247049 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:45.247076 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
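	(The lines above show one complete polling pass: minikube probes for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The same container check can be reproduced by hand on the node, assuming shell access such as `minikube ssh`; this is a minimal sketch using only the crictl invocation visible in the log, not part of the captured output:

	    # reproduce the per-component container check shown in the log above
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching $name"
	      else
	        echo "$name: $ids"
	      fi
	    done
	)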
	I1216 06:41:47.787199 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:47.797629 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:47.797694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:47.822803 1639474 cri.go:89] found id: ""
	I1216 06:41:47.822818 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.822825 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:47.822830 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:47.822894 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:47.848082 1639474 cri.go:89] found id: ""
	I1216 06:41:47.848109 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.848117 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:47.848122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:47.848199 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:47.874407 1639474 cri.go:89] found id: ""
	I1216 06:41:47.874421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.874428 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:47.874434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:47.874495 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:47.908568 1639474 cri.go:89] found id: ""
	I1216 06:41:47.908604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.908611 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:47.908617 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:47.908685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:47.942423 1639474 cri.go:89] found id: ""
	I1216 06:41:47.942438 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.942445 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:47.942450 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:47.942518 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:47.977188 1639474 cri.go:89] found id: ""
	I1216 06:41:47.977210 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.977218 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:47.977223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:47.977302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:48.011589 1639474 cri.go:89] found id: ""
	I1216 06:41:48.011604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:48.011623 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:48.011637 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:48.011649 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:48.090336 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:48.090357 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:48.106676 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:48.106693 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:48.174952 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:48.174963 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:48.174975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:48.244365 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:48.244386 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:50.777766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:50.790374 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:50.790436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:50.817848 1639474 cri.go:89] found id: ""
	I1216 06:41:50.817863 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.817870 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:50.817875 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:50.817947 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:50.848261 1639474 cri.go:89] found id: ""
	I1216 06:41:50.848277 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.848285 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:50.848290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:50.848357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:50.875745 1639474 cri.go:89] found id: ""
	I1216 06:41:50.875771 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.875779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:50.875784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:50.875857 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:50.908128 1639474 cri.go:89] found id: ""
	I1216 06:41:50.908142 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.908149 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:50.908154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:50.908216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:50.945866 1639474 cri.go:89] found id: ""
	I1216 06:41:50.945880 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.945897 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:50.945906 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:50.945988 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:50.976758 1639474 cri.go:89] found id: ""
	I1216 06:41:50.976772 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.976779 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:50.976790 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:50.976862 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:51.012047 1639474 cri.go:89] found id: ""
	I1216 06:41:51.012061 1639474 logs.go:282] 0 containers: []
	W1216 06:41:51.012080 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:51.012088 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:51.012099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:51.079840 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:51.079863 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:51.095967 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:51.095984 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:51.168911 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:51.168920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:51.168932 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:51.241258 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:51.241281 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:53.774859 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:53.785580 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:53.785647 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:53.815910 1639474 cri.go:89] found id: ""
	I1216 06:41:53.815946 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.815954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:53.815960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:53.816034 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:53.843197 1639474 cri.go:89] found id: ""
	I1216 06:41:53.843220 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.843228 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:53.843233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:53.843303 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:53.869584 1639474 cri.go:89] found id: ""
	I1216 06:41:53.869598 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.869605 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:53.869610 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:53.869672 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:53.898126 1639474 cri.go:89] found id: ""
	I1216 06:41:53.898141 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.898148 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:53.898154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:53.898217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:53.935008 1639474 cri.go:89] found id: ""
	I1216 06:41:53.935022 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.935029 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:53.935033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:53.935094 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:53.971715 1639474 cri.go:89] found id: ""
	I1216 06:41:53.971729 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.971740 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:53.971745 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:53.971827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:54.004089 1639474 cri.go:89] found id: ""
	I1216 06:41:54.004107 1639474 logs.go:282] 0 containers: []
	W1216 06:41:54.004115 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:54.004138 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:54.004151 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:54.072434 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:54.072455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:54.088417 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:54.088436 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:54.154720 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:54.154730 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:54.154741 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:54.223744 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:54.223763 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
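	(Each pass also collects the same five log sources. The commands appear verbatim in the log lines above and can be run manually on the node to inspect the failure; the kubectl path matches the v1.35.0-beta.0 binary referenced in the log, and nothing here is new beyond what the log already runs:

	    # log-gathering commands as run by minikube in the passes above
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a
	)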
	I1216 06:41:56.753558 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:56.764118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:56.764182 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:56.789865 1639474 cri.go:89] found id: ""
	I1216 06:41:56.789879 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.789886 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:56.789891 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:56.789954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:56.815375 1639474 cri.go:89] found id: ""
	I1216 06:41:56.815390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.815396 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:56.815401 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:56.815458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:56.843367 1639474 cri.go:89] found id: ""
	I1216 06:41:56.843381 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.843389 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:56.843394 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:56.843453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:56.869235 1639474 cri.go:89] found id: ""
	I1216 06:41:56.869249 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.869263 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:56.869268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:56.869325 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:56.894296 1639474 cri.go:89] found id: ""
	I1216 06:41:56.894310 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.894318 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:56.894323 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:56.894393 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:56.930771 1639474 cri.go:89] found id: ""
	I1216 06:41:56.930786 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.930795 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:56.930800 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:56.930877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:56.961829 1639474 cri.go:89] found id: ""
	I1216 06:41:56.961855 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.961862 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:56.961869 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:56.961880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:56.982515 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:56.982532 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:57.053403 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:57.053413 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:57.053424 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:57.122315 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:57.122338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:57.151668 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:57.151684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:59.721370 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:59.731285 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:59.731355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:59.759821 1639474 cri.go:89] found id: ""
	I1216 06:41:59.759835 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.759843 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:59.759848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:59.759905 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:59.784708 1639474 cri.go:89] found id: ""
	I1216 06:41:59.784721 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.784728 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:59.784733 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:59.784791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:59.810181 1639474 cri.go:89] found id: ""
	I1216 06:41:59.810196 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.810204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:59.810209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:59.810268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:59.836051 1639474 cri.go:89] found id: ""
	I1216 06:41:59.836072 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.836082 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:59.836094 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:59.836177 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:59.860701 1639474 cri.go:89] found id: ""
	I1216 06:41:59.860714 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.860722 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:59.860727 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:59.860786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:59.885062 1639474 cri.go:89] found id: ""
	I1216 06:41:59.885076 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.885092 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:59.885098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:59.885154 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:59.926044 1639474 cri.go:89] found id: ""
	I1216 06:41:59.926058 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.926065 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:59.926073 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:59.926099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:00.037850 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:00.037864 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:00.037877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:00.264777 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:00.264802 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:00.361496 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:00.361518 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:00.460153 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:00.460175 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:02.976790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:02.987102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:02.987180 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:03.015111 1639474 cri.go:89] found id: ""
	I1216 06:42:03.015126 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.015133 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:03.015139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:03.015202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:03.040871 1639474 cri.go:89] found id: ""
	I1216 06:42:03.040903 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.040910 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:03.040915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:03.040977 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:03.065726 1639474 cri.go:89] found id: ""
	I1216 06:42:03.065740 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.065748 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:03.065754 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:03.065813 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:03.090951 1639474 cri.go:89] found id: ""
	I1216 06:42:03.090966 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.090973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:03.090979 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:03.091037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:03.119521 1639474 cri.go:89] found id: ""
	I1216 06:42:03.119536 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.119543 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:03.119549 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:03.119615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:03.147166 1639474 cri.go:89] found id: ""
	I1216 06:42:03.147181 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.147188 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:03.147193 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:03.147267 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:03.172021 1639474 cri.go:89] found id: ""
	I1216 06:42:03.172035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.172042 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:03.172050 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:03.172060 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:03.186822 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:03.186838 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:03.250765 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:03.250775 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:03.250786 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:03.325562 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:03.325590 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:03.355074 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:03.355093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:05.922524 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:05.932734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:05.932804 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:05.960790 1639474 cri.go:89] found id: ""
	I1216 06:42:05.960804 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.960811 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:05.960816 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:05.960884 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:05.986356 1639474 cri.go:89] found id: ""
	I1216 06:42:05.986386 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.986394 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:05.986399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:05.986458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:06.015030 1639474 cri.go:89] found id: ""
	I1216 06:42:06.015046 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.015053 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:06.015058 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:06.015119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:06.041009 1639474 cri.go:89] found id: ""
	I1216 06:42:06.041023 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.041030 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:06.041035 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:06.041091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:06.068292 1639474 cri.go:89] found id: ""
	I1216 06:42:06.068306 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.068314 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:06.068319 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:06.068375 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:06.100555 1639474 cri.go:89] found id: ""
	I1216 06:42:06.100569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.100576 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:06.100582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:06.100642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:06.132353 1639474 cri.go:89] found id: ""
	I1216 06:42:06.132367 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.132374 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:06.132382 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:06.132392 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:06.201249 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:06.201259 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:06.201271 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:06.271083 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:06.271102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:06.300840 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:06.300857 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:06.369023 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:06.369043 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:08.885532 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:08.897655 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:08.897714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:08.929123 1639474 cri.go:89] found id: ""
	I1216 06:42:08.929137 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.929144 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:08.929149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:08.929216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:08.969020 1639474 cri.go:89] found id: ""
	I1216 06:42:08.969036 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.969043 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:08.969049 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:08.969107 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:08.995554 1639474 cri.go:89] found id: ""
	I1216 06:42:08.995569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.995577 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:08.995582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:08.995642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:09.023705 1639474 cri.go:89] found id: ""
	I1216 06:42:09.023720 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.023727 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:09.023732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:09.023795 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:09.050243 1639474 cri.go:89] found id: ""
	I1216 06:42:09.050263 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.050270 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:09.050275 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:09.050332 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:09.075763 1639474 cri.go:89] found id: ""
	I1216 06:42:09.075778 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.075786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:09.075791 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:09.075847 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:09.102027 1639474 cri.go:89] found id: ""
	I1216 06:42:09.102042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.102050 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:09.102058 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:09.102072 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:09.131304 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:09.131322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:09.197595 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:09.197616 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:09.214311 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:09.214329 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:09.280261 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:09.280272 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:09.280287 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
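	(The repeated "dial tcp [::1]:8441: connect: connection refused" errors indicate that nothing is listening on the apiserver port, consistent with crictl reporting no kube-apiserver container in any of the passes above. Assuming shell access to the node, standard tools not shown in the log (ss and curl) can confirm this; a hedged sketch, not part of the captured output:

	    # suggested manual check; ss/curl are not used by minikube in this log
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	    curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"
	)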
	I1216 06:42:11.849647 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:11.859759 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:11.859820 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:11.885934 1639474 cri.go:89] found id: ""
	I1216 06:42:11.885948 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.885955 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:11.885960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:11.886024 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:11.915333 1639474 cri.go:89] found id: ""
	I1216 06:42:11.915347 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.915354 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:11.915359 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:11.915420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:11.958797 1639474 cri.go:89] found id: ""
	I1216 06:42:11.958811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.958818 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:11.958823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:11.958882 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:11.986843 1639474 cri.go:89] found id: ""
	I1216 06:42:11.986858 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.986865 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:11.986870 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:11.986928 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:12.016252 1639474 cri.go:89] found id: ""
	I1216 06:42:12.016268 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.016275 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:12.016280 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:12.016340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:12.047250 1639474 cri.go:89] found id: ""
	I1216 06:42:12.047264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.047271 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:12.047276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:12.047334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:12.073692 1639474 cri.go:89] found id: ""
	I1216 06:42:12.073706 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.073713 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:12.073721 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:12.073732 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:12.137759 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:12.137769 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:12.137780 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:12.206794 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:12.206815 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:12.235894 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:12.235910 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:12.304248 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:12.304267 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:14.819229 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:14.829519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:14.829579 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:14.854644 1639474 cri.go:89] found id: ""
	I1216 06:42:14.854658 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.854665 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:14.854670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:14.854744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:14.879759 1639474 cri.go:89] found id: ""
	I1216 06:42:14.879774 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.879781 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:14.879785 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:14.879846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:14.914620 1639474 cri.go:89] found id: ""
	I1216 06:42:14.914633 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.914640 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:14.914645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:14.914706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:14.949457 1639474 cri.go:89] found id: ""
	I1216 06:42:14.949470 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.949477 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:14.949482 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:14.949539 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:14.978393 1639474 cri.go:89] found id: ""
	I1216 06:42:14.978407 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.978414 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:14.978419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:14.978485 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:15.059438 1639474 cri.go:89] found id: ""
	I1216 06:42:15.059454 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.059468 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:15.059474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:15.059560 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:15.087173 1639474 cri.go:89] found id: ""
	I1216 06:42:15.087188 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.087194 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:15.087202 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:15.087212 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:15.157589 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:15.157610 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:15.187757 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:15.187774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:15.256722 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:15.256742 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:15.271447 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:15.271464 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:15.332113 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:17.832401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:17.842950 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:17.843012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:17.871468 1639474 cri.go:89] found id: ""
	I1216 06:42:17.871483 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.871490 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:17.871496 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:17.871554 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:17.904274 1639474 cri.go:89] found id: ""
	I1216 06:42:17.904288 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.904295 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:17.904299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:17.904355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:17.936320 1639474 cri.go:89] found id: ""
	I1216 06:42:17.936334 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.936341 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:17.936346 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:17.936403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:17.967750 1639474 cri.go:89] found id: ""
	I1216 06:42:17.967764 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.967771 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:17.967775 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:17.967833 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:17.993994 1639474 cri.go:89] found id: ""
	I1216 06:42:17.994008 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.994016 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:17.994021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:17.994085 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:18.021367 1639474 cri.go:89] found id: ""
	I1216 06:42:18.021382 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.021390 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:18.021395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:18.021463 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:18.052100 1639474 cri.go:89] found id: ""
	I1216 06:42:18.052115 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.052122 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:18.052130 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:18.052141 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:18.117261 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:18.117282 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:18.132219 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:18.132235 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:18.198118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:18.198128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:18.198139 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:18.265118 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:18.265138 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:20.794027 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:20.803718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:20.803782 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:20.828191 1639474 cri.go:89] found id: ""
	I1216 06:42:20.828205 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.828212 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:20.828217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:20.828278 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:20.853411 1639474 cri.go:89] found id: ""
	I1216 06:42:20.853425 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.853432 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:20.853437 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:20.853499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:20.877825 1639474 cri.go:89] found id: ""
	I1216 06:42:20.877841 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.877848 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:20.877853 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:20.877908 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:20.910891 1639474 cri.go:89] found id: ""
	I1216 06:42:20.910904 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.910911 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:20.910916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:20.910973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:20.941025 1639474 cri.go:89] found id: ""
	I1216 06:42:20.941039 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.941045 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:20.941050 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:20.941108 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:20.973633 1639474 cri.go:89] found id: ""
	I1216 06:42:20.973647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.973654 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:20.973659 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:20.973714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:21.002805 1639474 cri.go:89] found id: ""
	I1216 06:42:21.002821 1639474 logs.go:282] 0 containers: []
	W1216 06:42:21.002828 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:21.002837 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:21.002849 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:21.068941 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:21.068961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:21.083829 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:21.083853 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:21.147337 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:21.147347 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:21.147359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:21.215583 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:21.215604 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.745376 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:23.755709 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:23.755771 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:23.781141 1639474 cri.go:89] found id: ""
	I1216 06:42:23.781155 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.781162 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:23.781168 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:23.781234 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:23.811661 1639474 cri.go:89] found id: ""
	I1216 06:42:23.811675 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.811683 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:23.811687 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:23.811745 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:23.837608 1639474 cri.go:89] found id: ""
	I1216 06:42:23.837623 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.837630 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:23.837635 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:23.837694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:23.864015 1639474 cri.go:89] found id: ""
	I1216 06:42:23.864041 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.864051 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:23.864057 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:23.864124 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:23.889789 1639474 cri.go:89] found id: ""
	I1216 06:42:23.889806 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.889813 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:23.889818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:23.889877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:23.918576 1639474 cri.go:89] found id: ""
	I1216 06:42:23.918590 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.918598 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:23.918603 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:23.918661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:23.950516 1639474 cri.go:89] found id: ""
	I1216 06:42:23.950531 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.950537 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:23.950545 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:23.950555 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.980911 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:23.980928 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:24.047333 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:24.047355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:24.063020 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:24.063037 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:24.131565 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:24.131574 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:24.131593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.704797 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:26.715064 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:26.715144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:26.741016 1639474 cri.go:89] found id: ""
	I1216 06:42:26.741030 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.741037 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:26.741043 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:26.741102 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:26.771178 1639474 cri.go:89] found id: ""
	I1216 06:42:26.771192 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.771200 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:26.771205 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:26.771263 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:26.796426 1639474 cri.go:89] found id: ""
	I1216 06:42:26.796440 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.796447 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:26.796452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:26.796530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:26.822428 1639474 cri.go:89] found id: ""
	I1216 06:42:26.822444 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.822451 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:26.822456 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:26.822512 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:26.855530 1639474 cri.go:89] found id: ""
	I1216 06:42:26.855545 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.855552 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:26.855557 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:26.855617 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:26.880135 1639474 cri.go:89] found id: ""
	I1216 06:42:26.880149 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.880156 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:26.880161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:26.880219 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:26.917307 1639474 cri.go:89] found id: ""
	I1216 06:42:26.917321 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.917327 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:26.917335 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:26.917347 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.997666 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:26.997690 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:27.033638 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:27.033662 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:27.104861 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:27.104880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:27.119683 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:27.119699 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:27.187945 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:29.688270 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:29.698566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:29.698629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:29.724518 1639474 cri.go:89] found id: ""
	I1216 06:42:29.724532 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.724539 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:29.724544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:29.724605 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:29.749436 1639474 cri.go:89] found id: ""
	I1216 06:42:29.749451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.749458 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:29.749463 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:29.749525 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:29.774261 1639474 cri.go:89] found id: ""
	I1216 06:42:29.774276 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.774283 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:29.774290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:29.774349 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:29.799455 1639474 cri.go:89] found id: ""
	I1216 06:42:29.799469 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.799478 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:29.799483 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:29.799541 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:29.823692 1639474 cri.go:89] found id: ""
	I1216 06:42:29.823707 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.823714 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:29.823718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:29.823784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:29.851131 1639474 cri.go:89] found id: ""
	I1216 06:42:29.851156 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.851164 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:29.851169 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:29.851239 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:29.875892 1639474 cri.go:89] found id: ""
	I1216 06:42:29.875906 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.875923 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:29.875931 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:29.875942 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:29.949752 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:29.949772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:29.966843 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:29.966860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:30.075177 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:30.075189 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:30.075201 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:30.153503 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:30.153525 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:32.683959 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:32.695552 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:32.695611 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:32.719250 1639474 cri.go:89] found id: ""
	I1216 06:42:32.719264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.719271 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:32.719276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:32.719335 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:32.744437 1639474 cri.go:89] found id: ""
	I1216 06:42:32.744451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.744459 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:32.744464 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:32.744568 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:32.772181 1639474 cri.go:89] found id: ""
	I1216 06:42:32.772196 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.772204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:32.772209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:32.772273 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:32.799021 1639474 cri.go:89] found id: ""
	I1216 06:42:32.799035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.799041 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:32.799046 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:32.799103 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:32.826452 1639474 cri.go:89] found id: ""
	I1216 06:42:32.826466 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.826473 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:32.826478 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:32.826535 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:32.854867 1639474 cri.go:89] found id: ""
	I1216 06:42:32.854881 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.854888 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:32.854893 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:32.854953 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:32.883584 1639474 cri.go:89] found id: ""
	I1216 06:42:32.883608 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.883615 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:32.883624 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:32.883635 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:32.969443 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:32.969472 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:33.000330 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:33.000354 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:33.068289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:33.068311 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:33.083127 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:33.083145 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:33.154304 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:35.655139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:35.665534 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:35.665616 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:35.691995 1639474 cri.go:89] found id: ""
	I1216 06:42:35.692009 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.692016 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:35.692021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:35.692079 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:35.718728 1639474 cri.go:89] found id: ""
	I1216 06:42:35.718742 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.718748 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:35.718753 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:35.718812 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:35.743314 1639474 cri.go:89] found id: ""
	I1216 06:42:35.743328 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.743334 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:35.743339 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:35.743400 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:35.767871 1639474 cri.go:89] found id: ""
	I1216 06:42:35.767885 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.767893 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:35.767897 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:35.767958 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:35.791769 1639474 cri.go:89] found id: ""
	I1216 06:42:35.791783 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.791790 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:35.791795 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:35.791854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:35.819002 1639474 cri.go:89] found id: ""
	I1216 06:42:35.819016 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.819023 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:35.819028 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:35.819083 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:35.843378 1639474 cri.go:89] found id: ""
	I1216 06:42:35.843392 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.843399 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:35.843407 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:35.843417 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:35.912874 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:35.912893 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:35.930936 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:35.930952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:36.006314 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:36.006326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:36.006338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:36.080077 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:36.080099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:38.612139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:38.622353 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:38.622412 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:38.648583 1639474 cri.go:89] found id: ""
	I1216 06:42:38.648597 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.648604 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:38.648613 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:38.648671 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:38.674035 1639474 cri.go:89] found id: ""
	I1216 06:42:38.674049 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.674056 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:38.674061 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:38.674119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:38.699213 1639474 cri.go:89] found id: ""
	I1216 06:42:38.699228 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.699234 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:38.699239 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:38.699294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:38.723415 1639474 cri.go:89] found id: ""
	I1216 06:42:38.723429 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.723436 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:38.723441 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:38.723499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:38.751059 1639474 cri.go:89] found id: ""
	I1216 06:42:38.751074 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.751081 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:38.751086 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:38.751146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:38.779542 1639474 cri.go:89] found id: ""
	I1216 06:42:38.779557 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.779584 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:38.779589 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:38.779660 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:38.813466 1639474 cri.go:89] found id: ""
	I1216 06:42:38.813480 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.813488 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:38.813496 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:38.813507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:38.842140 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:38.842158 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:38.908007 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:38.908027 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:38.923600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:38.923618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:38.995488 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:38.995498 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:38.995509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:41.565694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:41.575799 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:41.575860 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:41.600796 1639474 cri.go:89] found id: ""
	I1216 06:42:41.600811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.600817 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:41.600822 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:41.600879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:41.625792 1639474 cri.go:89] found id: ""
	I1216 06:42:41.625807 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.625814 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:41.625818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:41.625875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:41.650243 1639474 cri.go:89] found id: ""
	I1216 06:42:41.650257 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.650264 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:41.650269 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:41.650328 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:41.675889 1639474 cri.go:89] found id: ""
	I1216 06:42:41.675915 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.675923 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:41.675928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:41.675993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:41.703050 1639474 cri.go:89] found id: ""
	I1216 06:42:41.703064 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.703082 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:41.703088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:41.703146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:41.729269 1639474 cri.go:89] found id: ""
	I1216 06:42:41.729283 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.729293 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:41.729299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:41.729369 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:41.753781 1639474 cri.go:89] found id: ""
	I1216 06:42:41.753796 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.753803 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:41.753811 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:41.753821 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:41.783522 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:41.783538 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:41.848274 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:41.848295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:41.863600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:41.863618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:41.936160 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:41.936170 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:41.936181 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.511341 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:44.521587 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:44.521648 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:44.547007 1639474 cri.go:89] found id: ""
	I1216 06:42:44.547021 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.547028 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:44.547033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:44.547096 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:44.572902 1639474 cri.go:89] found id: ""
	I1216 06:42:44.572917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.572924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:44.572928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:44.572995 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:44.598645 1639474 cri.go:89] found id: ""
	I1216 06:42:44.598659 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.598667 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:44.598672 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:44.598731 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:44.627366 1639474 cri.go:89] found id: ""
	I1216 06:42:44.627381 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.627388 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:44.627396 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:44.627452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:44.654294 1639474 cri.go:89] found id: ""
	I1216 06:42:44.654309 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.654319 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:44.654324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:44.654382 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:44.679363 1639474 cri.go:89] found id: ""
	I1216 06:42:44.679378 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.679385 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:44.679392 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:44.679452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:44.714760 1639474 cri.go:89] found id: ""
	I1216 06:42:44.714775 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.714781 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:44.714789 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:44.714800 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:44.779035 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:44.779055 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:44.793727 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:44.793745 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:44.860570 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:44.860581 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:44.860594 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.934290 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:44.934310 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:47.465385 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:47.475377 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:47.475436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:47.503015 1639474 cri.go:89] found id: ""
	I1216 06:42:47.503042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.503049 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:47.503055 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:47.503136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:47.528903 1639474 cri.go:89] found id: ""
	I1216 06:42:47.528917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.528924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:47.528929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:47.528989 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:47.554766 1639474 cri.go:89] found id: ""
	I1216 06:42:47.554781 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.554788 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:47.554792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:47.554858 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:47.585092 1639474 cri.go:89] found id: ""
	I1216 06:42:47.585106 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.585113 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:47.585118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:47.585214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:47.610493 1639474 cri.go:89] found id: ""
	I1216 06:42:47.610508 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.610514 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:47.610519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:47.610577 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:47.635340 1639474 cri.go:89] found id: ""
	I1216 06:42:47.635354 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.635361 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:47.635365 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:47.635424 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:47.661321 1639474 cri.go:89] found id: ""
	I1216 06:42:47.661335 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.661342 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:47.661349 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:47.661360 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:47.726879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:47.726898 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:47.741659 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:47.741684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:47.804784 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:47.804795 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:47.804807 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:47.871075 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:47.871096 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.410207 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:50.419946 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:50.420007 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:50.446668 1639474 cri.go:89] found id: ""
	I1216 06:42:50.446683 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.446689 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:50.446694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:50.446753 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:50.471089 1639474 cri.go:89] found id: ""
	I1216 06:42:50.471119 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.471126 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:50.471131 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:50.471201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:50.496821 1639474 cri.go:89] found id: ""
	I1216 06:42:50.496836 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.496843 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:50.496848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:50.496906 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:50.522621 1639474 cri.go:89] found id: ""
	I1216 06:42:50.522647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.522655 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:50.522660 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:50.522720 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:50.547813 1639474 cri.go:89] found id: ""
	I1216 06:42:50.547828 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.547847 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:50.547858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:50.547926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:50.573695 1639474 cri.go:89] found id: ""
	I1216 06:42:50.573709 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.573716 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:50.573734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:50.573791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:50.597701 1639474 cri.go:89] found id: ""
	I1216 06:42:50.597728 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.597735 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:50.597743 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:50.597754 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.634166 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:50.634183 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:50.700131 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:50.700152 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:50.714678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:50.714694 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:50.782436 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:50.782446 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:50.782457 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:53.352592 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:53.362386 1639474 kubeadm.go:602] duration metric: took 4m3.23343297s to restartPrimaryControlPlane
	W1216 06:42:53.362440 1639474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 06:42:53.362522 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:42:53.770157 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:42:53.783560 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:42:53.791651 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:42:53.791714 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:42:53.800044 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:42:53.800054 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:42:53.800109 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:42:53.808053 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:42:53.808117 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:42:53.815698 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:42:53.823700 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:42:53.823760 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:42:53.831721 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.840020 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:42:53.840081 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.848003 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:42:53.856083 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:42:53.856151 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:42:53.863882 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:42:53.905755 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:42:53.905814 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:42:53.975149 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:42:53.975215 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:42:53.975250 1639474 kubeadm.go:319] OS: Linux
	I1216 06:42:53.975294 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:42:53.975341 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:42:53.975388 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:42:53.975435 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:42:53.975482 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:42:53.975528 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:42:53.975572 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:42:53.975619 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:42:53.975663 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:42:54.043340 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:42:54.043458 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:42:54.043554 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:42:54.051413 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:42:54.053411 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:42:54.053534 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:42:54.053635 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:42:54.053726 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:42:54.053790 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:42:54.053864 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:42:54.053921 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:42:54.054179 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:42:54.054243 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:42:54.054338 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:42:54.054707 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:42:54.054967 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:42:54.055037 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:42:54.157358 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:42:54.374409 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:42:54.451048 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:42:54.729890 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:42:55.123905 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:42:55.124705 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:42:55.129362 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:42:55.130938 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:42:55.131069 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:42:55.131195 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:42:55.132057 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:42:55.147012 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:42:55.147116 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:42:55.155648 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:42:55.155999 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:42:55.156106 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:42:55.287137 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:42:55.287251 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:46:55.288217 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001159637s
	I1216 06:46:55.288243 1639474 kubeadm.go:319] 
	I1216 06:46:55.288304 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:46:55.288336 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:46:55.288440 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:46:55.288445 1639474 kubeadm.go:319] 
	I1216 06:46:55.288565 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:46:55.288597 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:46:55.288627 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:46:55.288630 1639474 kubeadm.go:319] 
	I1216 06:46:55.292707 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:46:55.293173 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:46:55.293300 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:46:55.293545 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:46:55.293552 1639474 kubeadm.go:319] 
	I1216 06:46:55.293641 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 06:46:55.293765 1639474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001159637s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:46:55.293855 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:46:55.704413 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:46:55.717800 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:46:55.717860 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:46:55.726221 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:46:55.726230 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:46:55.726283 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:46:55.734520 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:46:55.734578 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:46:55.742443 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:46:55.750333 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:46:55.750396 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:46:55.758306 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.766326 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:46:55.766405 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.774041 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:46:55.782003 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:46:55.782061 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:46:55.789651 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:46:55.828645 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:46:55.828882 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:46:55.903247 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:46:55.903309 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:46:55.903344 1639474 kubeadm.go:319] OS: Linux
	I1216 06:46:55.903387 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:46:55.903435 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:46:55.903481 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:46:55.903528 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:46:55.903575 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:46:55.903627 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:46:55.903672 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:46:55.903719 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:46:55.903764 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:46:55.978404 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:46:55.978523 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:46:55.978635 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:46:55.988968 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:46:55.992562 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:46:55.992651 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:46:55.992728 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:46:55.992809 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:46:55.992874 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:46:55.992948 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:46:55.993006 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:46:55.993073 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:46:55.993138 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:46:55.993217 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:46:55.993295 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:46:55.993334 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:46:55.993394 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:46:56.216895 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:46:56.479326 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:46:56.885081 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:46:57.284813 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:46:57.705019 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:46:57.705808 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:46:57.708929 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:46:57.712185 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:46:57.712286 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:46:57.712364 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:46:57.713358 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:46:57.728440 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:46:57.729026 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:46:57.736761 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:46:57.737279 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:46:57.737495 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:46:57.864121 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:46:57.864234 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:50:57.863911 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000152952s
	I1216 06:50:57.863934 1639474 kubeadm.go:319] 
	I1216 06:50:57.863990 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:50:57.864023 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:50:57.864128 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:50:57.864133 1639474 kubeadm.go:319] 
	I1216 06:50:57.864236 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:50:57.864267 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:50:57.864298 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:50:57.864301 1639474 kubeadm.go:319] 
	I1216 06:50:57.868420 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:50:57.868920 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:50:57.869030 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:50:57.869291 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:50:57.869296 1639474 kubeadm.go:319] 
	I1216 06:50:57.869364 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:50:57.869421 1639474 kubeadm.go:403] duration metric: took 12m7.776167752s to StartCluster
	I1216 06:50:57.869453 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:50:57.869520 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:50:57.901135 1639474 cri.go:89] found id: ""
	I1216 06:50:57.901151 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.901158 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:50:57.901163 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:50:57.901226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:50:57.925331 1639474 cri.go:89] found id: ""
	I1216 06:50:57.925345 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.925352 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:50:57.925357 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:50:57.925415 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:50:57.950341 1639474 cri.go:89] found id: ""
	I1216 06:50:57.950356 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.950363 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:50:57.950367 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:50:57.950426 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:50:57.975123 1639474 cri.go:89] found id: ""
	I1216 06:50:57.975137 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.975144 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:50:57.975149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:50:57.975208 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:50:58.004659 1639474 cri.go:89] found id: ""
	I1216 06:50:58.004676 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.004684 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:50:58.004689 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:50:58.004760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:50:58.030464 1639474 cri.go:89] found id: ""
	I1216 06:50:58.030478 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.030485 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:50:58.030491 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:50:58.030552 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:50:58.056049 1639474 cri.go:89] found id: ""
	I1216 06:50:58.056063 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.056071 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:50:58.056079 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:50:58.056091 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:50:58.124116 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:50:58.124137 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:50:58.139439 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:50:58.139455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:50:58.229902 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:50:58.229914 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:50:58.229925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:50:58.301956 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:50:58.301977 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:50:58.330306 1639474 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:50:58.330348 1639474 out.go:285] * 
	W1216 06:50:58.330448 1639474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.330506 1639474 out.go:285] * 
	W1216 06:50:58.332927 1639474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:50:58.338210 1639474 out.go:203] 
	W1216 06:50:58.341028 1639474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.341164 1639474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:50:58.341212 1639474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:50:58.344413 1639474 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553471769Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553507896Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553554657Z" level=info msg="Create NRI interface"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553657485Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553665107Z" level=info msg="runtime interface created"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553674699Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553680746Z" level=info msg="runtime interface starting up..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553686137Z" level=info msg="starting plugins..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553698814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553771561Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:38:48 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.046654305Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=2afa36a7-e595-4e9e-9866-100014f74db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.047561496Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bfee085e-d788-43aa-852e-e818968557f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048165668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8209edd3-2ad3-4cea-9d15-760a1b94c10d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048839782Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f38b3b25-171e-488b-9dbb-3a4615d07ce7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049385123Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=674d3a91-05c7-4375-a638-2bb51d77e82a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049934157Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d7315967-45e5-4ab2-b579-15a88e3c5cf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.050441213Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2d27746-f739-4711-a521-d245b78e775c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.981581513Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=cc27c34f-1129-41fd-83b5-8698b0697603 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982462832Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f632e983-ad57-48b2-98c3-8802e4b6bb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982972654Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4d99c7a4-52a2-4a4f-9569-9d8a29ee230d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983463866Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=824a4ba3-63ed-49ce-a194-3bf34f462483 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983972891Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=52cc52f0-f1ca-4fc4-a91a-13dd8c19e754 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984501125Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=baf81d2d-269c-44fd-a82c-811876adf596 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984974015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=88fe0e4e-4ea7-4b38-a635-f3138f370377 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:50:59.567676   21190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:59.568056   21190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:59.569658   21190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:59.569999   21190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:59.571698   21190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:50:59 up  9:33,  0 user,  load average: 0.04, 0.15, 0.43
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:50:56 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:50:57 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 16 06:50:57 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:50:57 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:50:57 functional-364120 kubelet[20996]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:50:57 functional-364120 kubelet[20996]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:50:57 functional-364120 kubelet[20996]: E1216 06:50:57.444208   20996 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:50:57 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:50:57 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:50:58 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 16 06:50:58 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:50:58 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:50:58 functional-364120 kubelet[21067]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:50:58 functional-364120 kubelet[21067]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:50:58 functional-364120 kubelet[21067]: E1216 06:50:58.204216   21067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:50:58 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:50:58 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:50:58 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 16 06:50:58 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:50:58 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:50:58 functional-364120 kubelet[21106]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:50:58 functional-364120 kubelet[21106]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:50:58 functional-364120 kubelet[21106]: E1216 06:50:58.962853   21106 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:50:58 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:50:58 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (347.13105ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.41s)
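A hedged aside on the failure above, not part of the captured log: the kubelet journal shows the process exiting with "kubelet is configured to not run on a host using cgroup v1", and minikube's own suggestion in the log is to retry the start with an explicit kubelet cgroup driver. Reusing the binary path and profile name from this run, that retry would look roughly like the following (an illustrative sketch of the logged suggestion, not a verified fix):

	out/minikube-linux-arm64 start -p functional-364120 --extra-config=kubelet.cgroup-driver=systemd

Whether this alone clears the failure on a cgroup v1 host with kubelet v1.35.0-beta.0 is uncertain; the kubeadm warning in the same log indicates the kubelet configuration option 'FailCgroupV1' would also need to be set to 'false' for cgroup v1 to be accepted at all.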

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-364120 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-364120 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (67.613458ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-364120 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
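(For orientation: the "NetworkSettings.Ports" block in the inspect output above is what the provisioner later reads back with `docker container inspect -f` to discover the SSH host port — see the template calls further down in the logs. Below is a minimal Go sketch of that lookup; the container name "functional-364120" and port "22/tcp" are taken from this run as placeholders, and the helper itself is illustrative only, not code from minikube or this test suite.)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPortFor runs `docker container inspect -f` with the same Go template
    // style used in the provisioning log and returns the host port mapped to
    // the given container port (e.g. "22/tcp").
    func hostPortFor(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPortFor("functional-364120", "22/tcp")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(p) // per the Ports block above, this would print 34260
    }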
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (307.328393ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-487532 image ls --format json --alsologtostderr                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls --format table --alsologtostderr                                                                                       │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ update-context │ functional-487532 update-context --alsologtostderr -v=2                                                                                           │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image          │ functional-487532 image ls                                                                                                                        │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete         │ -p functional-487532                                                                                                                              │ functional-487532 │ jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:24 UTC │
	│ start          │ -p functional-364120 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:24 UTC │                     │
	│ start          │ -p functional-364120 --alsologtostderr -v=8                                                                                                       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:32 UTC │                     │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add registry.k8s.io/pause:latest                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache add minikube-local-cache-test:functional-364120                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ functional-364120 cache delete minikube-local-cache-test:functional-364120                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl images                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ cache          │ functional-364120 cache reload                                                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh            │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ kubectl        │ functional-364120 kubectl -- --context functional-364120 get pods                                                                                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ start          │ -p functional-364120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:38:45
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:38:45.382114 1639474 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:38:45.382275 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382279 1639474 out.go:374] Setting ErrFile to fd 2...
	I1216 06:38:45.382283 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382644 1639474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:38:45.383081 1639474 out.go:368] Setting JSON to false
	I1216 06:38:45.383946 1639474 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33677,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:38:45.384032 1639474 start.go:143] virtualization:  
	I1216 06:38:45.387610 1639474 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:38:45.391422 1639474 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:38:45.391485 1639474 notify.go:221] Checking for updates...
	I1216 06:38:45.397275 1639474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:38:45.400538 1639474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:38:45.403348 1639474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:38:45.406183 1639474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:38:45.410019 1639474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:38:45.413394 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:45.413485 1639474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:38:45.451796 1639474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:38:45.451901 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.529304 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.519310041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.529400 1639474 docker.go:319] overlay module found
	I1216 06:38:45.532456 1639474 out.go:179] * Using the docker driver based on existing profile
	I1216 06:38:45.535342 1639474 start.go:309] selected driver: docker
	I1216 06:38:45.535352 1639474 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.535432 1639474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:38:45.535555 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.605792 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.594564391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.606168 1639474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:38:45.606189 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:45.606237 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:45.606285 1639474 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.611347 1639474 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:38:45.614388 1639474 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:38:45.617318 1639474 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:38:45.620204 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:45.620247 1639474 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:38:45.620256 1639474 cache.go:65] Caching tarball of preloaded images
	I1216 06:38:45.620287 1639474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:38:45.620351 1639474 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:38:45.620360 1639474 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:38:45.620487 1639474 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:38:45.639567 1639474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:38:45.639578 1639474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:38:45.639591 1639474 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:38:45.639630 1639474 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:38:45.639687 1639474 start.go:364] duration metric: took 37.908µs to acquireMachinesLock for "functional-364120"
	I1216 06:38:45.639706 1639474 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:38:45.639711 1639474 fix.go:54] fixHost starting: 
	I1216 06:38:45.639996 1639474 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:38:45.656952 1639474 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:38:45.656970 1639474 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:38:45.660116 1639474 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:38:45.660138 1639474 machine.go:94] provisionDockerMachine start ...
	I1216 06:38:45.660218 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.677387 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.677705 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.677711 1639474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:38:45.812247 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.812262 1639474 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:38:45.812325 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.830038 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.830333 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.830342 1639474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:38:45.969440 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.969519 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.987438 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.987738 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.987751 1639474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:38:46.120750 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:38:46.120766 1639474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:38:46.120795 1639474 ubuntu.go:190] setting up certificates
	I1216 06:38:46.120811 1639474 provision.go:84] configureAuth start
	I1216 06:38:46.120880 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:46.139450 1639474 provision.go:143] copyHostCerts
	I1216 06:38:46.139518 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:38:46.139535 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:38:46.139611 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:38:46.139701 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:38:46.139705 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:38:46.139730 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:38:46.139777 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:38:46.139780 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:38:46.139802 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:38:46.139846 1639474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:38:46.453267 1639474 provision.go:177] copyRemoteCerts
	I1216 06:38:46.453323 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:38:46.453367 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.472384 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:46.568304 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:38:46.585458 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:38:46.602822 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:38:46.619947 1639474 provision.go:87] duration metric: took 499.122604ms to configureAuth
	I1216 06:38:46.619964 1639474 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:38:46.620160 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:46.620252 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.637350 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:46.637660 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:46.637671 1639474 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:38:46.957629 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:38:46.957641 1639474 machine.go:97] duration metric: took 1.297496853s to provisionDockerMachine
	I1216 06:38:46.957652 1639474 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:38:46.957670 1639474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:38:46.957741 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:38:46.957790 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.978202 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.080335 1639474 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:38:47.083578 1639474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:38:47.083597 1639474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:38:47.083607 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:38:47.083662 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:38:47.083735 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:38:47.083808 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:38:47.083855 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:38:47.091346 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:47.108874 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:38:47.126774 1639474 start.go:296] duration metric: took 169.103296ms for postStartSetup
	I1216 06:38:47.126870 1639474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:38:47.126918 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.145224 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.237421 1639474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:38:47.242526 1639474 fix.go:56] duration metric: took 1.602809118s for fixHost
	I1216 06:38:47.242542 1639474 start.go:83] releasing machines lock for "functional-364120", held for 1.602847814s
	I1216 06:38:47.242635 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:47.260121 1639474 ssh_runner.go:195] Run: cat /version.json
	I1216 06:38:47.260167 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.260174 1639474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:38:47.260224 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.277503 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.283903 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.464356 1639474 ssh_runner.go:195] Run: systemctl --version
	I1216 06:38:47.476410 1639474 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:38:47.514461 1639474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:38:47.518820 1639474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:38:47.518882 1639474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:38:47.526809 1639474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:38:47.526823 1639474 start.go:496] detecting cgroup driver to use...
	I1216 06:38:47.526855 1639474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:38:47.526909 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:38:47.542915 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:38:47.556456 1639474 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:38:47.556532 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:38:47.572387 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:38:47.585623 1639474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:38:47.693830 1639474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:38:47.836192 1639474 docker.go:234] disabling docker service ...
	I1216 06:38:47.836253 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:38:47.851681 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:38:47.865315 1639474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:38:47.985223 1639474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:38:48.104393 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:38:48.118661 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:38:48.136892 1639474 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:38:48.136961 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.147508 1639474 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:38:48.147579 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.156495 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.165780 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.174392 1639474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:38:48.182433 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.191004 1639474 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.198914 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.207365 1639474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:38:48.214548 1639474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:38:48.221727 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.346771 1639474 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:38:48.562751 1639474 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:38:48.562822 1639474 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:38:48.566564 1639474 start.go:564] Will wait 60s for crictl version
	I1216 06:38:48.566626 1639474 ssh_runner.go:195] Run: which crictl
	I1216 06:38:48.570268 1639474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:38:48.600286 1639474 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:38:48.600360 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.630102 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.662511 1639474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:38:48.665401 1639474 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:38:48.681394 1639474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:38:48.688428 1639474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 06:38:48.691264 1639474 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:38:48.691424 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:48.691501 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.730823 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.730835 1639474 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:38:48.730892 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.756054 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.756075 1639474 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:38:48.756081 1639474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:38:48.756185 1639474 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:38:48.756284 1639474 ssh_runner.go:195] Run: crio config
	I1216 06:38:48.821920 1639474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 06:38:48.821940 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:48.821953 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:48.821961 1639474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:38:48.821989 1639474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:38:48.822118 1639474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:38:48.822186 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:38:48.830098 1639474 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:38:48.830166 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:38:48.837393 1639474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:38:48.849769 1639474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:38:48.862224 1639474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1216 06:38:48.875020 1639474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:38:48.878641 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.988462 1639474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:38:49.398022 1639474 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:38:49.398033 1639474 certs.go:195] generating shared ca certs ...
	I1216 06:38:49.398047 1639474 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:38:49.398216 1639474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:38:49.398259 1639474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:38:49.398266 1639474 certs.go:257] generating profile certs ...
	I1216 06:38:49.398355 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:38:49.398397 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:38:49.398442 1639474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:38:49.398557 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:38:49.398591 1639474 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:38:49.398598 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:38:49.398627 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:38:49.398648 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:38:49.398673 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:38:49.398722 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:49.399378 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:38:49.420435 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:38:49.440537 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:38:49.460786 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:38:49.480628 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:38:49.497487 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:38:49.514939 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:38:49.532313 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:38:49.550215 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:38:49.580225 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:38:49.597583 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:38:49.615627 1639474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:38:49.629067 1639474 ssh_runner.go:195] Run: openssl version
	I1216 06:38:49.635264 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.642707 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:38:49.650527 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654313 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654369 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.695142 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:38:49.702542 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.709833 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:38:49.717202 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720835 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720891 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.762100 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:38:49.769702 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.777475 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:38:49.785134 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789017 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789075 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.830097 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
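The three passes above (test -s, ln -fs into /etc/ssl/certs, openssl x509 -hash -noout, then test -L on the hash-named link) are how each CA bundle gets installed under its OpenSSL subject hash so the node's trust store resolves it. Below is a minimal, illustrative Go sketch of that hash-to-symlink check, assuming a local openssl binary and a hypothetical subjectHash helper; the real code drives these commands over SSH via ssh_runner.

// Sketch only (not from the report): compute the OpenSSL subject hash that
// names the expected symlink, e.g. minikubeCA.pem -> /etc/ssl/certs/b5213941.0.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs the same openssl invocation seen in the log and returns
// the hash used as the symlink name under /etc/ssl/certs.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	// minikube then verifies `sudo test -L /etc/ssl/certs/<hash>.0` on the node.
	fmt.Printf("expected symlink: /etc/ssl/certs/%s.0\n", hash)
}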
	I1216 06:38:49.837887 1639474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:38:49.841718 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:38:49.883003 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:38:49.923792 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:38:49.964873 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:38:50.009367 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:38:50.051701 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 06:38:50.093263 1639474 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:50.093349 1639474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:38:50.093423 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.120923 1639474 cri.go:89] found id: ""
	I1216 06:38:50.120988 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:38:50.128935 1639474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:38:50.128944 1639474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:38:50.129001 1639474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:38:50.136677 1639474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.137223 1639474 kubeconfig.go:125] found "functional-364120" server: "https://192.168.49.2:8441"
	I1216 06:38:50.138591 1639474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:38:50.148403 1639474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 06:24:13.753381452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 06:38:48.871691407 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
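The drift detection shown above reduces to a unified diff between the deployed /var/tmp/minikube/kubeadm.yaml and the freshly rendered kubeadm.yaml.new; any difference (here, the enable-admission-plugins value switching to NamespaceAutoProvision) triggers a cluster reconfiguration. A hedged sketch of that check follows, using a hypothetical kubeadmConfigDrift helper rather than minikube's actual function.

// Sketch only: "drift" is simply a non-zero exit from `diff -u old new`.
package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrift reports whether the two files differ and returns the diff.
// Note: as a simplification, any error (including a missing file) counts as drift.
func kubeadmConfigDrift(oldPath, newPath string) (bool, string) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	return err != nil, string(out)
}

func main() {
	drift, diff := kubeadmConfigDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if drift {
		fmt.Println("detected kubeadm config drift, will reconfigure:\n" + diff)
	}
}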
	I1216 06:38:50.148423 1639474 kubeadm.go:1161] stopping kube-system containers ...
	I1216 06:38:50.148434 1639474 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 06:38:50.148512 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.182168 1639474 cri.go:89] found id: ""
	I1216 06:38:50.182231 1639474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 06:38:50.201521 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:38:50.209281 1639474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 06:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 06:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 06:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 06:28 /etc/kubernetes/scheduler.conf
	
	I1216 06:38:50.209338 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:38:50.217195 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:38:50.224648 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.224702 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:38:50.231990 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.239836 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.239894 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.247352 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:38:50.254862 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.254916 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
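Each /etc/kubernetes/*.conf above is grepped for the expected control-plane endpoint and deleted when the endpoint is absent, so the subsequent kubeadm phases can regenerate it. A sketch of that check-and-prune pattern, assuming a hypothetical pruneStaleKubeconfig helper; the report only shows the grep and rm commands, not minikube's internals.

// Sketch only: grep exits non-zero when the endpoint is missing, which is the
// signal to remove the stale kubeconfig.
package main

import (
	"fmt"
	"os/exec"
)

func pruneStaleKubeconfig(path, endpoint string) {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		fmt.Printf("%q not found in %s - removing\n", endpoint, path)
		_ = exec.Command("sudo", "rm", "-f", path).Run()
	}
}

func main() {
	for _, f := range []string{"kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		pruneStaleKubeconfig("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8441")
	}
}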
	I1216 06:38:50.262178 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:38:50.270092 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:50.316982 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.327287 1639474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010279379s)
	I1216 06:38:51.327357 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.524152 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.584718 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
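The restart then replays the kubeadm init phases one by one (certs, kubeconfig, kubelet-start, control-plane, etcd) against the copied config, as the five commands above show. An illustrative local sketch of that sequence; the paths and the v1.35.0-beta.0 binary directory are taken from the log, while running the phases over SSH and the error handling are simplified assumptions.

// Sketch only: replay the kubeadm init phases used for an in-place restart.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := `env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase ` + p +
			` --config /var/tmp/minikube/kubeadm.yaml`
		out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}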
	I1216 06:38:51.627519 1639474 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:38:51.627603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.127996 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.128739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.628621 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.128741 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.627831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.128517 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.628413 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.627801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.128288 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.628401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.128329 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.627998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.127831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.628547 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.628540 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.128146 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.128721 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.628766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.628489 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.627784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.128544 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.128535 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.627955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.127765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.627817 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.128692 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.628069 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.627921 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.128708 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.627689 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.127821 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.627890 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.127687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.628412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.128182 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.627796 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.128611 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.127795 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.628147 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.127806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.627762 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.127677 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.628043 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.127752 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.627697 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.128437 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.627779 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.128353 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.628739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.128542 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.628449 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.628679 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.128464 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.628609 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.127698 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.628073 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.128615 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.627743 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.127794 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.628605 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.627806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.628042 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.128637 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.627742 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.128694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.627803 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.127790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.628497 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.127786 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.627780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.627974 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.128440 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.628685 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.128622 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.628715 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.128328 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.628129 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.127678 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.628187 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.128724 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.627765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.127823 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.627834 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.128417 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.628784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.128501 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.628458 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.128381 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.128387 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.627769 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.128638 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.627687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.628346 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.128443 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.628500 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.128632 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.628608 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.128412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.628099 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.128601 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.127801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.628098 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:51.127749 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
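The block above is a roughly 500ms polling loop waiting for a kube-apiserver process to appear; after about a minute with no match (06:38:51 to 06:39:51 here) it falls through to log gathering. A sketch of that wait loop, assuming a hypothetical waitForAPIServerProcess helper and a one-minute deadline; the exact timeout is inferred from the timestamps, not stated in this excerpt.

// Sketch only: poll pgrep every 500ms until the apiserver shows up or we time out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same check the log shows: pgrep for the newest matching kube-apiserver process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	if !waitForAPIServerProcess(time.Minute) {
		fmt.Println("apiserver process never appeared; falling back to log gathering")
	}
}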
	I1216 06:39:51.627803 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:51.627880 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:51.662321 1639474 cri.go:89] found id: ""
	I1216 06:39:51.662334 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.662341 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:51.662347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:51.662418 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:51.693006 1639474 cri.go:89] found id: ""
	I1216 06:39:51.693020 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.693027 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:51.693032 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:51.693091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:51.719156 1639474 cri.go:89] found id: ""
	I1216 06:39:51.719169 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.719176 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:51.719181 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:51.719237 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:51.745402 1639474 cri.go:89] found id: ""
	I1216 06:39:51.745416 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.745423 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:51.745429 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:51.745492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:51.771770 1639474 cri.go:89] found id: ""
	I1216 06:39:51.771784 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.771791 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:51.771796 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:51.771854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:51.797172 1639474 cri.go:89] found id: ""
	I1216 06:39:51.797186 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.797192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:51.797198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:51.797257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:51.825478 1639474 cri.go:89] found id: ""
	I1216 06:39:51.825492 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.825499 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:51.825506 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:51.825516 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:51.897574 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:51.897593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:51.925635 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:51.925652 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:51.993455 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:51.993477 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:52.027866 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:52.027883 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:52.096535 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
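When no control-plane containers are found, each retry gathers the same diagnostics: kubelet and CRI-O journals, dmesg, container status, and kubectl describe nodes (which fails here because nothing is listening on localhost:8441). A hedged sketch of that gathering pass; the command strings are copied from the log, while the surrounding Go loop is a simplified assumption rather than minikube's logs.go.

// Sketch only: run the diagnostic commands the log repeats on every retry.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", s.name, err, out)
	}
}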
	I1216 06:39:54.597178 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:54.607445 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:54.607507 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:54.634705 1639474 cri.go:89] found id: ""
	I1216 06:39:54.634719 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.634733 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:54.634739 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:54.634800 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:54.668209 1639474 cri.go:89] found id: ""
	I1216 06:39:54.668223 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.668230 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:54.668235 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:54.668293 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:54.703300 1639474 cri.go:89] found id: ""
	I1216 06:39:54.703314 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.703321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:54.703326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:54.703385 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:54.732154 1639474 cri.go:89] found id: ""
	I1216 06:39:54.732168 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.732175 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:54.732180 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:54.732241 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:54.758222 1639474 cri.go:89] found id: ""
	I1216 06:39:54.758237 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.758244 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:54.758249 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:54.758309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:54.783433 1639474 cri.go:89] found id: ""
	I1216 06:39:54.783456 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.783463 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:54.783474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:54.783544 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:54.811264 1639474 cri.go:89] found id: ""
	I1216 06:39:54.811277 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.811284 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:54.811291 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:54.811302 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:54.876784 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:54.876805 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:54.891733 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:54.891749 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:54.963951 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:54.963962 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:54.963975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:55.036358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:55.036380 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:57.569339 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:57.579596 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:57.579659 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:57.604959 1639474 cri.go:89] found id: ""
	I1216 06:39:57.604973 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.604980 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:57.604985 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:57.605045 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:57.630710 1639474 cri.go:89] found id: ""
	I1216 06:39:57.630725 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.630731 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:57.630736 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:57.630794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:57.662734 1639474 cri.go:89] found id: ""
	I1216 06:39:57.662748 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.662756 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:57.662773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:57.662838 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:57.699847 1639474 cri.go:89] found id: ""
	I1216 06:39:57.699868 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.699875 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:57.699880 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:57.699941 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:57.726549 1639474 cri.go:89] found id: ""
	I1216 06:39:57.726563 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.726570 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:57.726575 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:57.726639 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:57.752583 1639474 cri.go:89] found id: ""
	I1216 06:39:57.752597 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.752604 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:57.752609 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:57.752667 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:57.780752 1639474 cri.go:89] found id: ""
	I1216 06:39:57.780767 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.780774 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:57.780782 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:57.780793 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:57.846931 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:57.846952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:57.862606 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:57.862623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:57.928743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:57.928764 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:57.928775 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:57.997232 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:57.997254 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:00.537687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:00.558059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:00.558144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:00.594907 1639474 cri.go:89] found id: ""
	I1216 06:40:00.594929 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.594939 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:00.594953 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:00.595036 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:00.628243 1639474 cri.go:89] found id: ""
	I1216 06:40:00.628272 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.628280 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:00.628294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:00.628377 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:00.667757 1639474 cri.go:89] found id: ""
	I1216 06:40:00.667773 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.667791 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:00.667797 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:00.667873 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:00.707304 1639474 cri.go:89] found id: ""
	I1216 06:40:00.707319 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.707327 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:00.707333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:00.707413 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:00.742620 1639474 cri.go:89] found id: ""
	I1216 06:40:00.742636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.742644 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:00.742650 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:00.742727 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:00.772404 1639474 cri.go:89] found id: ""
	I1216 06:40:00.772421 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.772429 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:00.772435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:00.772526 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:00.800238 1639474 cri.go:89] found id: ""
	I1216 06:40:00.800253 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.800260 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:00.800268 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:00.800280 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:00.866967 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:00.866989 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:00.883111 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:00.883127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:00.951359 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:00.951371 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:00.951382 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:01.020844 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:01.020870 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:03.552704 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:03.563452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:03.563545 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:03.588572 1639474 cri.go:89] found id: ""
	I1216 06:40:03.588585 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.588592 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:03.588598 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:03.588665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:03.617698 1639474 cri.go:89] found id: ""
	I1216 06:40:03.617712 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.617719 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:03.617724 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:03.617784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:03.643270 1639474 cri.go:89] found id: ""
	I1216 06:40:03.643285 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.643291 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:03.643296 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:03.643356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:03.679135 1639474 cri.go:89] found id: ""
	I1216 06:40:03.679148 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.679155 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:03.679160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:03.679217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:03.707978 1639474 cri.go:89] found id: ""
	I1216 06:40:03.707991 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.707998 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:03.708003 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:03.708071 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:03.741796 1639474 cri.go:89] found id: ""
	I1216 06:40:03.741821 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.741827 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:03.741832 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:03.741899 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:03.767959 1639474 cri.go:89] found id: ""
	I1216 06:40:03.767983 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.767991 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:03.767998 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:03.768009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:03.833601 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:03.833622 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:03.848136 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:03.848154 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:03.911646 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:03.911661 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:03.911672 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:03.980874 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:03.980894 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.512671 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:06.522859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:06.522944 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:06.552384 1639474 cri.go:89] found id: ""
	I1216 06:40:06.552399 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.552406 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:06.552411 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:06.552492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:06.577262 1639474 cri.go:89] found id: ""
	I1216 06:40:06.577276 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.577293 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:06.577299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:06.577357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:06.603757 1639474 cri.go:89] found id: ""
	I1216 06:40:06.603772 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.603779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:06.603784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:06.603850 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:06.629717 1639474 cri.go:89] found id: ""
	I1216 06:40:06.629732 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.629751 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:06.629756 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:06.629846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:06.665006 1639474 cri.go:89] found id: ""
	I1216 06:40:06.665031 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.665039 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:06.665044 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:06.665109 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:06.698777 1639474 cri.go:89] found id: ""
	I1216 06:40:06.698791 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.698807 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:06.698813 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:06.698879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:06.727424 1639474 cri.go:89] found id: ""
	I1216 06:40:06.727448 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.727455 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:06.727464 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:06.727475 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.758535 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:06.758552 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:06.827915 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:06.827944 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:06.843925 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:06.843949 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:06.913118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:06.913128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:06.913140 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.481120 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:09.491592 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:09.491658 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:09.518336 1639474 cri.go:89] found id: ""
	I1216 06:40:09.518351 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.518358 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:09.518363 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:09.518423 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:09.547930 1639474 cri.go:89] found id: ""
	I1216 06:40:09.547943 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.547950 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:09.547955 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:09.548012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:09.574921 1639474 cri.go:89] found id: ""
	I1216 06:40:09.574935 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.574942 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:09.574947 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:09.575008 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:09.600427 1639474 cri.go:89] found id: ""
	I1216 06:40:09.600495 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.600502 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:09.600508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:09.600567 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:09.628992 1639474 cri.go:89] found id: ""
	I1216 06:40:09.629006 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.629015 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:09.629019 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:09.629080 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:09.667383 1639474 cri.go:89] found id: ""
	I1216 06:40:09.667397 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.667404 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:09.667409 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:09.667468 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:09.710169 1639474 cri.go:89] found id: ""
	I1216 06:40:09.710183 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.710190 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:09.710197 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:09.710208 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:09.776054 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:09.776075 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:09.790720 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:09.790736 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:09.855182 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:09.855192 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:09.855204 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.922382 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:09.922402 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.451670 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:12.461890 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:12.461962 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:12.486630 1639474 cri.go:89] found id: ""
	I1216 06:40:12.486644 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.486650 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:12.486657 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:12.486719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:12.514531 1639474 cri.go:89] found id: ""
	I1216 06:40:12.514545 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.514551 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:12.514558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:12.514621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:12.541612 1639474 cri.go:89] found id: ""
	I1216 06:40:12.541627 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.541633 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:12.541638 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:12.541703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:12.567638 1639474 cri.go:89] found id: ""
	I1216 06:40:12.567652 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.567659 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:12.567664 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:12.567723 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:12.593074 1639474 cri.go:89] found id: ""
	I1216 06:40:12.593089 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.593096 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:12.593101 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:12.593164 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:12.621872 1639474 cri.go:89] found id: ""
	I1216 06:40:12.621886 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.621893 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:12.621898 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:12.621954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:12.658898 1639474 cri.go:89] found id: ""
	I1216 06:40:12.658912 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.658919 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:12.658927 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:12.658939 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:12.736529 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:12.736540 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:12.736551 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:12.804860 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:12.804881 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.834018 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:12.834036 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:12.903542 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:12.903564 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:15.418582 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:15.428941 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:15.429002 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:15.458081 1639474 cri.go:89] found id: ""
	I1216 06:40:15.458096 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.458103 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:15.458109 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:15.458172 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:15.487644 1639474 cri.go:89] found id: ""
	I1216 06:40:15.487658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.487665 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:15.487670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:15.487729 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:15.512552 1639474 cri.go:89] found id: ""
	I1216 06:40:15.512565 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.512572 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:15.512577 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:15.512646 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:15.537944 1639474 cri.go:89] found id: ""
	I1216 06:40:15.537958 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.537965 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:15.537971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:15.538030 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:15.574197 1639474 cri.go:89] found id: ""
	I1216 06:40:15.574211 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.574218 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:15.574223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:15.574289 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:15.603183 1639474 cri.go:89] found id: ""
	I1216 06:40:15.603197 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.603204 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:15.603209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:15.603272 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:15.628682 1639474 cri.go:89] found id: ""
	I1216 06:40:15.628696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.628703 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:15.628710 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:15.628720 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:15.716665 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:15.716676 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:15.716687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:15.787785 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:15.787806 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:15.815751 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:15.815772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:15.885879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:15.885902 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.402627 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:18.413143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:18.413213 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:18.439934 1639474 cri.go:89] found id: ""
	I1216 06:40:18.439948 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.439956 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:18.439961 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:18.440023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:18.467477 1639474 cri.go:89] found id: ""
	I1216 06:40:18.467491 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.467498 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:18.467503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:18.467564 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:18.492982 1639474 cri.go:89] found id: ""
	I1216 06:40:18.493002 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.493009 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:18.493013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:18.493073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:18.519158 1639474 cri.go:89] found id: ""
	I1216 06:40:18.519173 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.519180 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:18.519185 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:18.519250 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:18.544672 1639474 cri.go:89] found id: ""
	I1216 06:40:18.544687 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.544694 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:18.544699 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:18.544760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:18.574100 1639474 cri.go:89] found id: ""
	I1216 06:40:18.574115 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.574122 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:18.574127 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:18.574190 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:18.600048 1639474 cri.go:89] found id: ""
	I1216 06:40:18.600062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.600069 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:18.600077 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:18.600087 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:18.670680 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:18.670700 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.686391 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:18.686408 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:18.756196 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:18.756206 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:18.756218 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:18.824602 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:18.824623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.356152 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:21.366658 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:21.366719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:21.391945 1639474 cri.go:89] found id: ""
	I1216 06:40:21.391959 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.391966 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:21.391971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:21.392032 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:21.419561 1639474 cri.go:89] found id: ""
	I1216 06:40:21.419581 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.419588 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:21.419593 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:21.419662 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:21.446105 1639474 cri.go:89] found id: ""
	I1216 06:40:21.446119 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.446135 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:21.446143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:21.446212 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:21.472095 1639474 cri.go:89] found id: ""
	I1216 06:40:21.472110 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.472117 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:21.472123 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:21.472188 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:21.502751 1639474 cri.go:89] found id: ""
	I1216 06:40:21.502766 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.502773 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:21.502778 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:21.502841 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:21.528514 1639474 cri.go:89] found id: ""
	I1216 06:40:21.528538 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.528546 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:21.528551 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:21.528623 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:21.554279 1639474 cri.go:89] found id: ""
	I1216 06:40:21.554293 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.554300 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:21.554308 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:21.554319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:21.622775 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:21.622786 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:21.622795 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:21.692973 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:21.692993 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.722066 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:21.722083 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:21.789953 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:21.789974 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:24.305740 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:24.315908 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:24.315976 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:24.344080 1639474 cri.go:89] found id: ""
	I1216 06:40:24.344095 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.344102 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:24.344108 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:24.344169 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:24.370623 1639474 cri.go:89] found id: ""
	I1216 06:40:24.370638 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.370645 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:24.370649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:24.370714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:24.397678 1639474 cri.go:89] found id: ""
	I1216 06:40:24.397701 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.397709 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:24.397714 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:24.397787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:24.427585 1639474 cri.go:89] found id: ""
	I1216 06:40:24.427599 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.427607 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:24.427612 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:24.427685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:24.457451 1639474 cri.go:89] found id: ""
	I1216 06:40:24.457465 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.457472 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:24.457489 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:24.457562 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:24.483717 1639474 cri.go:89] found id: ""
	I1216 06:40:24.483731 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.483738 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:24.483743 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:24.483817 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:24.509734 1639474 cri.go:89] found id: ""
	I1216 06:40:24.509748 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.509756 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:24.509763 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:24.509774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:24.575490 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:24.575510 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:24.590459 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:24.590476 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:24.660840 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:24.660854 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:24.660865 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:24.742683 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:24.742706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:27.272978 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:27.283654 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:27.283721 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:27.310045 1639474 cri.go:89] found id: ""
	I1216 06:40:27.310060 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.310067 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:27.310072 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:27.310132 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:27.339319 1639474 cri.go:89] found id: ""
	I1216 06:40:27.339334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.339342 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:27.339347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:27.339409 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:27.366885 1639474 cri.go:89] found id: ""
	I1216 06:40:27.366901 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.366910 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:27.366915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:27.366980 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:27.392968 1639474 cri.go:89] found id: ""
	I1216 06:40:27.392982 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.392989 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:27.392994 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:27.393072 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:27.425432 1639474 cri.go:89] found id: ""
	I1216 06:40:27.425446 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.425466 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:27.425471 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:27.425538 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:27.454875 1639474 cri.go:89] found id: ""
	I1216 06:40:27.454899 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.454906 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:27.454912 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:27.454982 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:27.480348 1639474 cri.go:89] found id: ""
	I1216 06:40:27.480363 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.480370 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:27.480378 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:27.480389 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:27.550687 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:27.550715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:27.566692 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:27.566711 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:27.634204 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:27.634214 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:27.634227 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:27.706020 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:27.706040 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:30.238169 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:30.248488 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:30.248550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:30.274527 1639474 cri.go:89] found id: ""
	I1216 06:40:30.274542 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.274549 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:30.274554 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:30.274615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:30.300592 1639474 cri.go:89] found id: ""
	I1216 06:40:30.300610 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.300617 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:30.300624 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:30.300693 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:30.327939 1639474 cri.go:89] found id: ""
	I1216 06:40:30.327966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.327973 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:30.327978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:30.328040 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:30.358884 1639474 cri.go:89] found id: ""
	I1216 06:40:30.358898 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.358905 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:30.358910 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:30.358968 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:30.387991 1639474 cri.go:89] found id: ""
	I1216 06:40:30.388005 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.388012 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:30.388017 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:30.388090 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:30.413034 1639474 cri.go:89] found id: ""
	I1216 06:40:30.413048 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.413055 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:30.413059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:30.413118 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:30.449975 1639474 cri.go:89] found id: ""
	I1216 06:40:30.450018 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.450034 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:30.450041 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:30.450053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:30.466503 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:30.466521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:30.528819 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:30.528828 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:30.528839 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:30.597696 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:30.597715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:30.625300 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:30.625317 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.194250 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:33.204305 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:33.204368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:33.229739 1639474 cri.go:89] found id: ""
	I1216 06:40:33.229753 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.229760 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:33.229765 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:33.229821 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:33.254131 1639474 cri.go:89] found id: ""
	I1216 06:40:33.254144 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.254151 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:33.254156 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:33.254214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:33.279859 1639474 cri.go:89] found id: ""
	I1216 06:40:33.279881 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.279889 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:33.279894 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:33.279956 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:33.305951 1639474 cri.go:89] found id: ""
	I1216 06:40:33.305966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.305973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:33.305978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:33.306037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:33.335767 1639474 cri.go:89] found id: ""
	I1216 06:40:33.335781 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.335789 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:33.335793 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:33.335859 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:33.362761 1639474 cri.go:89] found id: ""
	I1216 06:40:33.362774 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.362781 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:33.362786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:33.362843 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:33.389319 1639474 cri.go:89] found id: ""
	I1216 06:40:33.389334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.389340 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:33.389348 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:33.389359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:33.453913 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:33.453925 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:33.453936 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:33.522875 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:33.522895 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:33.556966 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:33.556981 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.624329 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:33.624350 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:36.139596 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:36.150559 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:36.150621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:36.176931 1639474 cri.go:89] found id: ""
	I1216 06:40:36.176946 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.176954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:36.176959 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:36.177023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:36.203410 1639474 cri.go:89] found id: ""
	I1216 06:40:36.203424 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.203430 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:36.203435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:36.203498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:36.232378 1639474 cri.go:89] found id: ""
	I1216 06:40:36.232393 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.232399 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:36.232407 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:36.232504 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:36.258614 1639474 cri.go:89] found id: ""
	I1216 06:40:36.258636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.258644 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:36.258649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:36.258711 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:36.287134 1639474 cri.go:89] found id: ""
	I1216 06:40:36.287149 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.287156 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:36.287161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:36.287225 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:36.316901 1639474 cri.go:89] found id: ""
	I1216 06:40:36.316915 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.316922 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:36.316927 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:36.316991 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:36.343964 1639474 cri.go:89] found id: ""
	I1216 06:40:36.343979 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.343988 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:36.343997 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:36.344009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:36.409151 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:36.409161 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:36.409172 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:36.477694 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:36.477717 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:36.507334 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:36.507355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:36.577747 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:36.577766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.094282 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:39.105025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:39.105089 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:39.131493 1639474 cri.go:89] found id: ""
	I1216 06:40:39.131507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.131514 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:39.131525 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:39.131586 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:39.163796 1639474 cri.go:89] found id: ""
	I1216 06:40:39.163811 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.163819 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:39.163823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:39.163886 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:39.191137 1639474 cri.go:89] found id: ""
	I1216 06:40:39.191152 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.191160 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:39.191165 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:39.191226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:39.217834 1639474 cri.go:89] found id: ""
	I1216 06:40:39.217850 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.217857 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:39.217862 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:39.217926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:39.244937 1639474 cri.go:89] found id: ""
	I1216 06:40:39.244951 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.244958 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:39.244963 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:39.245026 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:39.274684 1639474 cri.go:89] found id: ""
	I1216 06:40:39.274698 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.274706 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:39.274711 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:39.274774 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:39.302124 1639474 cri.go:89] found id: ""
	I1216 06:40:39.302138 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.302145 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:39.302153 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:39.302163 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:39.370146 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:39.370166 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:39.397930 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:39.397946 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:39.469905 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:39.469925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.487153 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:39.487169 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:39.556831 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.057113 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:42.068649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:42.068719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:42.098202 1639474 cri.go:89] found id: ""
	I1216 06:40:42.098217 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.098224 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:42.098229 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:42.098294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:42.130680 1639474 cri.go:89] found id: ""
	I1216 06:40:42.130696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.130703 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:42.130708 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:42.130779 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:42.167131 1639474 cri.go:89] found id: ""
	I1216 06:40:42.167146 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.167153 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:42.167160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:42.167230 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:42.197324 1639474 cri.go:89] found id: ""
	I1216 06:40:42.197339 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.197346 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:42.197352 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:42.197420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:42.225831 1639474 cri.go:89] found id: ""
	I1216 06:40:42.225848 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.225856 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:42.225861 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:42.225930 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:42.257762 1639474 cri.go:89] found id: ""
	I1216 06:40:42.257777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.257786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:42.257792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:42.257852 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:42.284492 1639474 cri.go:89] found id: ""
	I1216 06:40:42.284507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.284515 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:42.284523 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:42.284535 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:42.351298 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:42.351319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:42.367176 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:42.367193 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:42.433375 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.433386 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:42.433396 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:42.500708 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:42.500729 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.031368 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:45.055503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:45.055570 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:45.098074 1639474 cri.go:89] found id: ""
	I1216 06:40:45.098091 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.098100 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:45.098105 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:45.098174 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:45.144642 1639474 cri.go:89] found id: ""
	I1216 06:40:45.144658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.144666 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:45.144671 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:45.144743 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:45.177748 1639474 cri.go:89] found id: ""
	I1216 06:40:45.177777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.177786 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:45.177792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:45.177875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:45.237332 1639474 cri.go:89] found id: ""
	I1216 06:40:45.237350 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.237368 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:45.237373 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:45.237462 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:45.277580 1639474 cri.go:89] found id: ""
	I1216 06:40:45.277608 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.277625 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:45.277631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:45.277787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:45.319169 1639474 cri.go:89] found id: ""
	I1216 06:40:45.319184 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.319192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:45.319198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:45.319268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:45.355649 1639474 cri.go:89] found id: ""
	I1216 06:40:45.355663 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.355672 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:45.355691 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:45.355723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:45.423762 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:45.423783 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.451985 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:45.452002 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:45.516593 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:45.516613 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:45.531478 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:45.531500 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:45.596800 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:48.098483 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:48.108786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:48.108849 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:48.134211 1639474 cri.go:89] found id: ""
	I1216 06:40:48.134225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.134232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:48.134237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:48.134297 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:48.160517 1639474 cri.go:89] found id: ""
	I1216 06:40:48.160531 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.160538 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:48.160544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:48.160604 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:48.185669 1639474 cri.go:89] found id: ""
	I1216 06:40:48.185682 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.185690 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:48.185694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:48.185754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:48.210265 1639474 cri.go:89] found id: ""
	I1216 06:40:48.210279 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.210286 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:48.210291 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:48.210403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:48.234252 1639474 cri.go:89] found id: ""
	I1216 06:40:48.234267 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.234274 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:48.234279 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:48.234339 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:48.259358 1639474 cri.go:89] found id: ""
	I1216 06:40:48.259372 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.259379 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:48.259384 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:48.259443 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:48.288697 1639474 cri.go:89] found id: ""
	I1216 06:40:48.288713 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.288720 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:48.288728 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:48.288738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:48.357686 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:48.357712 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:48.372954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:48.372973 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:48.434679 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:48.434689 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:48.434701 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:48.505103 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:48.505127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:51.033411 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:51.043540 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:51.043600 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:51.070010 1639474 cri.go:89] found id: ""
	I1216 06:40:51.070025 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.070032 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:51.070037 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:51.070100 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:51.096267 1639474 cri.go:89] found id: ""
	I1216 06:40:51.096282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.096290 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:51.096295 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:51.096356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:51.122692 1639474 cri.go:89] found id: ""
	I1216 06:40:51.122707 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.122714 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:51.122719 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:51.122784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:51.152647 1639474 cri.go:89] found id: ""
	I1216 06:40:51.152662 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.152670 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:51.152680 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:51.152744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:51.180574 1639474 cri.go:89] found id: ""
	I1216 06:40:51.180589 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.180597 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:51.180602 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:51.180668 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:51.206605 1639474 cri.go:89] found id: ""
	I1216 06:40:51.206619 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.206626 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:51.206631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:51.206695 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:51.231786 1639474 cri.go:89] found id: ""
	I1216 06:40:51.231809 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.231817 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:51.231825 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:51.231835 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:51.297100 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:51.297120 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:51.311954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:51.311972 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:51.379683 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:51.379694 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:51.379706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:51.447537 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:51.447557 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:53.983520 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:53.993929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:53.993987 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:54.023619 1639474 cri.go:89] found id: ""
	I1216 06:40:54.023634 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.023640 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:54.023645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:54.023708 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:54.049842 1639474 cri.go:89] found id: ""
	I1216 06:40:54.049857 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.049864 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:54.049869 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:54.049934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:54.077181 1639474 cri.go:89] found id: ""
	I1216 06:40:54.077205 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.077212 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:54.077217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:54.077280 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:54.105267 1639474 cri.go:89] found id: ""
	I1216 06:40:54.105282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.105291 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:54.105297 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:54.105363 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:54.130851 1639474 cri.go:89] found id: ""
	I1216 06:40:54.130874 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.130881 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:54.130886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:54.130949 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:54.156895 1639474 cri.go:89] found id: ""
	I1216 06:40:54.156910 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.156917 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:54.156923 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:54.156983 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:54.183545 1639474 cri.go:89] found id: ""
	I1216 06:40:54.183560 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.183566 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:54.183574 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:54.183584 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:54.249489 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:54.249509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:54.263930 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:54.263947 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:54.329743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:54.329755 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:54.329766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:54.396582 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:54.396603 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:56.928591 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:56.939856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:56.939917 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:56.967210 1639474 cri.go:89] found id: ""
	I1216 06:40:56.967225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.967232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:56.967237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:56.967298 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:56.993815 1639474 cri.go:89] found id: ""
	I1216 06:40:56.993829 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.993836 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:56.993841 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:56.993898 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:57.029670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.029684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.029691 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:57.029696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:57.029754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:57.054833 1639474 cri.go:89] found id: ""
	I1216 06:40:57.054847 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.054854 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:57.054859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:57.054924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:57.079670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.079684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.079691 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:57.079696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:57.079761 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:57.104048 1639474 cri.go:89] found id: ""
	I1216 06:40:57.104062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.104069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:57.104074 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:57.104142 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:57.129442 1639474 cri.go:89] found id: ""
	I1216 06:40:57.129462 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.129469 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:57.129477 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:57.129487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:57.197165 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:57.197185 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:57.226479 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:57.226498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:57.292031 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:57.292053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:57.306889 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:57.306905 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:57.372214 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:59.872521 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:59.882455 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:59.882521 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:59.913998 1639474 cri.go:89] found id: ""
	I1216 06:40:59.914012 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.914020 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:59.914025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:59.914091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:59.942569 1639474 cri.go:89] found id: ""
	I1216 06:40:59.942583 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.942589 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:59.942594 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:59.942665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:59.970700 1639474 cri.go:89] found id: ""
	I1216 06:40:59.970729 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.970736 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:59.970742 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:59.970809 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:59.997067 1639474 cri.go:89] found id: ""
	I1216 06:40:59.997085 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.997092 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:59.997098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:59.997163 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:00.191988 1639474 cri.go:89] found id: ""
	I1216 06:41:00.192005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.192013 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:00.192018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:00.192086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:00.277353 1639474 cri.go:89] found id: ""
	I1216 06:41:00.277369 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.277377 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:00.277382 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:00.277497 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:00.317655 1639474 cri.go:89] found id: ""
	I1216 06:41:00.317680 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.317688 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:00.317697 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:00.317710 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:00.373222 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:00.373244 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:00.450289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:00.450312 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:00.467305 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:00.467321 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:00.537520 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:00.537529 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:00.537544 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.105837 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:03.116211 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:03.116271 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:03.140992 1639474 cri.go:89] found id: ""
	I1216 06:41:03.141005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.141013 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:03.141018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:03.141077 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:03.169832 1639474 cri.go:89] found id: ""
	I1216 06:41:03.169846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.169853 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:03.169858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:03.169923 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:03.200294 1639474 cri.go:89] found id: ""
	I1216 06:41:03.200308 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.200316 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:03.200321 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:03.200422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:03.226615 1639474 cri.go:89] found id: ""
	I1216 06:41:03.226629 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.226635 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:03.226641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:03.226702 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:03.252099 1639474 cri.go:89] found id: ""
	I1216 06:41:03.252113 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.252120 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:03.252125 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:03.252186 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:03.277049 1639474 cri.go:89] found id: ""
	I1216 06:41:03.277064 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.277070 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:03.277075 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:03.277136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:03.302834 1639474 cri.go:89] found id: ""
	I1216 06:41:03.302850 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.302857 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:03.302865 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:03.302877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:03.369696 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:03.369719 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:03.384336 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:03.384358 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:03.450962 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:03.450973 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:03.450985 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.522274 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:03.522297 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:06.053196 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:06.063351 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:06.063422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:06.089075 1639474 cri.go:89] found id: ""
	I1216 06:41:06.089089 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.089096 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:06.089102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:06.089162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:06.118245 1639474 cri.go:89] found id: ""
	I1216 06:41:06.118259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.118266 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:06.118271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:06.118336 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:06.143697 1639474 cri.go:89] found id: ""
	I1216 06:41:06.143724 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.143732 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:06.143737 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:06.143805 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:06.169572 1639474 cri.go:89] found id: ""
	I1216 06:41:06.169586 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.169594 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:06.169599 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:06.169661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:06.195851 1639474 cri.go:89] found id: ""
	I1216 06:41:06.195867 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.195874 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:06.195879 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:06.195942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:06.223692 1639474 cri.go:89] found id: ""
	I1216 06:41:06.223707 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.223715 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:06.223720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:06.223780 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:06.249649 1639474 cri.go:89] found id: ""
	I1216 06:41:06.249679 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.249686 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:06.249694 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:06.249705 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:06.314738 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:06.314759 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:06.329678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:06.329695 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:06.395023 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:06.395034 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:06.395046 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:06.463667 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:06.463687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:08.992603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:09.003856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:09.003937 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:09.031578 1639474 cri.go:89] found id: ""
	I1216 06:41:09.031592 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.031599 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:09.031604 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:09.031663 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:09.056946 1639474 cri.go:89] found id: ""
	I1216 06:41:09.056961 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.056969 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:09.056974 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:09.057035 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:09.082038 1639474 cri.go:89] found id: ""
	I1216 06:41:09.082053 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.082060 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:09.082065 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:09.082125 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:09.107847 1639474 cri.go:89] found id: ""
	I1216 06:41:09.107862 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.107869 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:09.107874 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:09.107933 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:09.133995 1639474 cri.go:89] found id: ""
	I1216 06:41:09.134010 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.134017 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:09.134022 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:09.134086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:09.159110 1639474 cri.go:89] found id: ""
	I1216 06:41:09.159125 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.159132 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:09.159137 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:09.159197 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:09.189150 1639474 cri.go:89] found id: ""
	I1216 06:41:09.189164 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.189171 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:09.189179 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:09.189190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:09.251080 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:09.251090 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:09.251102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:09.318859 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:09.318879 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:09.349358 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:09.349381 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:09.418362 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:09.418385 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:11.933431 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:11.944248 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:11.944309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:11.976909 1639474 cri.go:89] found id: ""
	I1216 06:41:11.976924 1639474 logs.go:282] 0 containers: []
	W1216 06:41:11.976932 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:11.976937 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:11.976998 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:12.011035 1639474 cri.go:89] found id: ""
	I1216 06:41:12.011050 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.011057 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:12.011062 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:12.011126 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:12.041473 1639474 cri.go:89] found id: ""
	I1216 06:41:12.041495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.041502 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:12.041508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:12.041571 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:12.066438 1639474 cri.go:89] found id: ""
	I1216 06:41:12.066463 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.066471 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:12.066477 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:12.066542 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:12.090884 1639474 cri.go:89] found id: ""
	I1216 06:41:12.090899 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.090906 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:12.090911 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:12.090970 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:12.116491 1639474 cri.go:89] found id: ""
	I1216 06:41:12.116506 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.116516 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:12.116522 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:12.116580 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:12.142941 1639474 cri.go:89] found id: ""
	I1216 06:41:12.142956 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.142963 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:12.142971 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:12.142982 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:12.172125 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:12.172142 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:12.240713 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:12.240734 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:12.255672 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:12.255689 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:12.321167 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:12.321177 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:12.321190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:14.894286 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:14.904324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:14.904383 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:14.938397 1639474 cri.go:89] found id: ""
	I1216 06:41:14.938421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.938429 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:14.938434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:14.938501 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:14.967116 1639474 cri.go:89] found id: ""
	I1216 06:41:14.967130 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.967137 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:14.967141 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:14.967203 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:14.993300 1639474 cri.go:89] found id: ""
	I1216 06:41:14.993324 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.993331 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:14.993336 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:14.993414 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:15.065324 1639474 cri.go:89] found id: ""
	I1216 06:41:15.065347 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.065374 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:15.065379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:15.065453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:15.094230 1639474 cri.go:89] found id: ""
	I1216 06:41:15.094254 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.094262 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:15.094268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:15.094334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:15.125543 1639474 cri.go:89] found id: ""
	I1216 06:41:15.125557 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.125567 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:15.125574 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:15.125641 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:15.153256 1639474 cri.go:89] found id: ""
	I1216 06:41:15.153271 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.153280 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:15.153287 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:15.153298 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:15.220613 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:15.220633 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:15.235620 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:15.235637 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:15.298217 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:15.298227 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:15.298238 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:15.366620 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:15.366643 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:17.896595 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:17.908386 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:17.908446 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:17.937743 1639474 cri.go:89] found id: ""
	I1216 06:41:17.937757 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.937763 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:17.937768 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:17.937827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:17.970369 1639474 cri.go:89] found id: ""
	I1216 06:41:17.970383 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.970390 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:17.970395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:17.970453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:17.996832 1639474 cri.go:89] found id: ""
	I1216 06:41:17.996846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.996853 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:17.996858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:17.996924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:18.038145 1639474 cri.go:89] found id: ""
	I1216 06:41:18.038159 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.038167 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:18.038172 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:18.038235 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:18.064225 1639474 cri.go:89] found id: ""
	I1216 06:41:18.064239 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.064248 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:18.064254 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:18.064314 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:18.094775 1639474 cri.go:89] found id: ""
	I1216 06:41:18.094789 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.094797 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:18.094802 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:18.094863 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:18.120874 1639474 cri.go:89] found id: ""
	I1216 06:41:18.120888 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.120895 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:18.120903 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:18.120913 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:18.188407 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:18.188429 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:18.221279 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:18.221295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:18.288107 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:18.288129 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:18.303324 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:18.303342 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:18.371049 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:20.871320 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:20.881458 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:20.881519 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:20.910690 1639474 cri.go:89] found id: ""
	I1216 06:41:20.910704 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.910711 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:20.910716 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:20.910778 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:20.940115 1639474 cri.go:89] found id: ""
	I1216 06:41:20.940131 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.940138 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:20.940144 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:20.940205 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:20.971890 1639474 cri.go:89] found id: ""
	I1216 06:41:20.971904 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.971911 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:20.971916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:20.971973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:20.997611 1639474 cri.go:89] found id: ""
	I1216 06:41:20.997627 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.997634 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:20.997639 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:20.997714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:21.028905 1639474 cri.go:89] found id: ""
	I1216 06:41:21.028919 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.028926 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:21.028931 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:21.028990 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:21.055176 1639474 cri.go:89] found id: ""
	I1216 06:41:21.055190 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.055197 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:21.055202 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:21.055262 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:21.081697 1639474 cri.go:89] found id: ""
	I1216 06:41:21.081712 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.081719 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:21.081727 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:21.081738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:21.148234 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:21.148255 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:21.164172 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:21.164192 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:21.228352 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:21.228362 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:21.228374 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:21.295358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:21.295378 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:23.826021 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:23.836732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:23.836794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:23.865987 1639474 cri.go:89] found id: ""
	I1216 06:41:23.866001 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.866008 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:23.866013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:23.866073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:23.891393 1639474 cri.go:89] found id: ""
	I1216 06:41:23.891408 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.891415 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:23.891420 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:23.891486 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:23.918388 1639474 cri.go:89] found id: ""
	I1216 06:41:23.918403 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.918410 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:23.918415 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:23.918475 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:23.961374 1639474 cri.go:89] found id: ""
	I1216 06:41:23.961390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.961397 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:23.961402 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:23.961461 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:23.987162 1639474 cri.go:89] found id: ""
	I1216 06:41:23.987176 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.987184 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:23.987195 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:23.987257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:24.016111 1639474 cri.go:89] found id: ""
	I1216 06:41:24.016127 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.016134 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:24.016139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:24.016202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:24.043481 1639474 cri.go:89] found id: ""
	I1216 06:41:24.043495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.043503 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:24.043511 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:24.043521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:24.111316 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:24.111326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:24.111338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:24.178630 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:24.178650 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:24.213388 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:24.213405 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:24.283269 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:24.283290 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:26.798616 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:26.808720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:26.808786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:26.834419 1639474 cri.go:89] found id: ""
	I1216 06:41:26.834433 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.834451 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:26.834457 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:26.834530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:26.860230 1639474 cri.go:89] found id: ""
	I1216 06:41:26.860244 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.860251 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:26.860256 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:26.860316 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:26.886841 1639474 cri.go:89] found id: ""
	I1216 06:41:26.886856 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.886863 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:26.886868 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:26.886934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:26.933097 1639474 cri.go:89] found id: ""
	I1216 06:41:26.933121 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.933129 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:26.933134 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:26.933201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:26.967219 1639474 cri.go:89] found id: ""
	I1216 06:41:26.967233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.967241 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:26.967258 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:26.967319 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:27.008045 1639474 cri.go:89] found id: ""
	I1216 06:41:27.008074 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.008082 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:27.008088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:27.008156 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:27.034453 1639474 cri.go:89] found id: ""
	I1216 06:41:27.034469 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.034476 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:27.034484 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:27.034507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:27.104223 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:27.104245 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:27.119468 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:27.119487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:27.188973 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:27.188983 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:27.188994 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:27.258008 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:27.258028 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:29.786955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:29.797122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:29.797184 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:29.824207 1639474 cri.go:89] found id: ""
	I1216 06:41:29.824221 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.824228 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:29.824233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:29.824290 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:29.850615 1639474 cri.go:89] found id: ""
	I1216 06:41:29.850630 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.850636 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:29.850641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:29.850703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:29.876387 1639474 cri.go:89] found id: ""
	I1216 06:41:29.876401 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.876408 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:29.876413 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:29.876498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:29.907653 1639474 cri.go:89] found id: ""
	I1216 06:41:29.907667 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.907674 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:29.907678 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:29.907735 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:29.944219 1639474 cri.go:89] found id: ""
	I1216 06:41:29.944233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.944239 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:29.944244 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:29.944302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:29.976007 1639474 cri.go:89] found id: ""
	I1216 06:41:29.976021 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.976029 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:29.976033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:29.976095 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:30.024272 1639474 cri.go:89] found id: ""
	I1216 06:41:30.024289 1639474 logs.go:282] 0 containers: []
	W1216 06:41:30.024297 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:30.024306 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:30.024322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:30.119806 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:30.119827 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:30.136379 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:30.136400 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:30.205690 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:30.205700 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:30.205723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:30.274216 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:30.274240 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:32.809139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:32.819371 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:32.819431 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:32.847039 1639474 cri.go:89] found id: ""
	I1216 06:41:32.847054 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.847065 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:32.847070 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:32.847138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:32.875215 1639474 cri.go:89] found id: ""
	I1216 06:41:32.875229 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.875236 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:32.875240 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:32.875300 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:32.907300 1639474 cri.go:89] found id: ""
	I1216 06:41:32.907314 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.907321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:32.907326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:32.907381 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:32.938860 1639474 cri.go:89] found id: ""
	I1216 06:41:32.938874 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.938881 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:32.938886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:32.938942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:32.971352 1639474 cri.go:89] found id: ""
	I1216 06:41:32.971366 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.971374 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:32.971379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:32.971436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:33.012516 1639474 cri.go:89] found id: ""
	I1216 06:41:33.012531 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.012538 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:33.012543 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:33.012622 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:33.041830 1639474 cri.go:89] found id: ""
	I1216 06:41:33.041844 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.041851 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:33.041859 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:33.041869 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:33.107636 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:33.107656 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:33.122787 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:33.122803 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:33.191649 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:33.191659 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:33.191682 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:33.263447 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:33.263474 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:35.794998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:35.805176 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:35.805236 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:35.831135 1639474 cri.go:89] found id: ""
	I1216 06:41:35.831149 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.831156 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:35.831161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:35.831223 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:35.860254 1639474 cri.go:89] found id: ""
	I1216 06:41:35.860281 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.860289 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:35.860294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:35.860360 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:35.887306 1639474 cri.go:89] found id: ""
	I1216 06:41:35.887320 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.887327 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:35.887333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:35.887391 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:35.917653 1639474 cri.go:89] found id: ""
	I1216 06:41:35.917668 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.917690 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:35.917696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:35.917763 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:35.959523 1639474 cri.go:89] found id: ""
	I1216 06:41:35.959546 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.959553 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:35.959558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:35.959629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:35.989044 1639474 cri.go:89] found id: ""
	I1216 06:41:35.989062 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.989069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:35.989077 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:35.989138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:36.024859 1639474 cri.go:89] found id: ""
	I1216 06:41:36.024875 1639474 logs.go:282] 0 containers: []
	W1216 06:41:36.024885 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:36.024895 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:36.024912 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:36.056878 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:36.056896 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:36.121811 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:36.121834 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:36.137437 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:36.137455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:36.205908 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:36.205920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:36.205931 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:38.776930 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:38.786842 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:38.786902 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:38.812622 1639474 cri.go:89] found id: ""
	I1216 06:41:38.812637 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.812644 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:38.812649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:38.812705 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:38.838434 1639474 cri.go:89] found id: ""
	I1216 06:41:38.838448 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.838456 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:38.838461 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:38.838523 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:38.863392 1639474 cri.go:89] found id: ""
	I1216 06:41:38.863407 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.863414 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:38.863419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:38.863479 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:38.888908 1639474 cri.go:89] found id: ""
	I1216 06:41:38.888922 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.888929 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:38.888934 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:38.888993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:38.917217 1639474 cri.go:89] found id: ""
	I1216 06:41:38.917247 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.917255 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:38.917260 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:38.917340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:38.951610 1639474 cri.go:89] found id: ""
	I1216 06:41:38.951623 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.951630 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:38.951645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:38.951706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:38.982144 1639474 cri.go:89] found id: ""
	I1216 06:41:38.982158 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.982165 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:38.982173 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:38.982184 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:39.051829 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:39.051839 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:39.051860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:39.125701 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:39.125723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:39.157087 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:39.157104 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:39.225477 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:39.225498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:41.740919 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:41.751149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:41.751211 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:41.776245 1639474 cri.go:89] found id: ""
	I1216 06:41:41.776259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.776266 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:41.776271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:41.776330 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:41.801530 1639474 cri.go:89] found id: ""
	I1216 06:41:41.801543 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.801556 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:41.801561 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:41.801619 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:41.826287 1639474 cri.go:89] found id: ""
	I1216 06:41:41.826300 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.826307 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:41.826312 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:41.826368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:41.855404 1639474 cri.go:89] found id: ""
	I1216 06:41:41.855419 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.855426 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:41.855431 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:41.855490 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:41.883079 1639474 cri.go:89] found id: ""
	I1216 06:41:41.883093 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.883100 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:41.883104 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:41.883162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:41.924362 1639474 cri.go:89] found id: ""
	I1216 06:41:41.924376 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.924393 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:41.924399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:41.924503 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:41.958054 1639474 cri.go:89] found id: ""
	I1216 06:41:41.958069 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.958076 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:41.958083 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:41.958093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:42.031093 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:42.031104 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:42.031117 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:42.098938 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:42.098961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:42.132662 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:42.132681 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:42.206635 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:42.206658 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:44.725533 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:44.735690 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:44.735751 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:44.764539 1639474 cri.go:89] found id: ""
	I1216 06:41:44.764554 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.764561 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:44.764566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:44.764624 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:44.789462 1639474 cri.go:89] found id: ""
	I1216 06:41:44.789476 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.789483 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:44.789487 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:44.789550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:44.813863 1639474 cri.go:89] found id: ""
	I1216 06:41:44.813877 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.813884 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:44.813889 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:44.813948 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:44.842990 1639474 cri.go:89] found id: ""
	I1216 06:41:44.843006 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.843013 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:44.843018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:44.843076 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:44.868986 1639474 cri.go:89] found id: ""
	I1216 06:41:44.869000 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.869006 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:44.869013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:44.869070 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:44.897735 1639474 cri.go:89] found id: ""
	I1216 06:41:44.897759 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.897767 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:44.897773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:44.897840 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:44.927690 1639474 cri.go:89] found id: ""
	I1216 06:41:44.927715 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.927722 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:44.927730 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:44.927740 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:45.002166 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:45.002190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:45.029027 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:45.029047 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:45.167411 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:45.167428 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:45.167448 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:45.247049 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:45.247076 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:47.787199 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:47.797629 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:47.797694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:47.822803 1639474 cri.go:89] found id: ""
	I1216 06:41:47.822818 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.822825 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:47.822830 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:47.822894 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:47.848082 1639474 cri.go:89] found id: ""
	I1216 06:41:47.848109 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.848117 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:47.848122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:47.848199 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:47.874407 1639474 cri.go:89] found id: ""
	I1216 06:41:47.874421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.874428 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:47.874434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:47.874495 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:47.908568 1639474 cri.go:89] found id: ""
	I1216 06:41:47.908604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.908611 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:47.908617 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:47.908685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:47.942423 1639474 cri.go:89] found id: ""
	I1216 06:41:47.942438 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.942445 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:47.942450 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:47.942518 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:47.977188 1639474 cri.go:89] found id: ""
	I1216 06:41:47.977210 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.977218 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:47.977223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:47.977302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:48.011589 1639474 cri.go:89] found id: ""
	I1216 06:41:48.011604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:48.011623 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:48.011637 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:48.011649 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:48.090336 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:48.090357 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:48.106676 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:48.106693 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:48.174952 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:48.174963 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:48.174975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:48.244365 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:48.244386 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:50.777766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:50.790374 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:50.790436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:50.817848 1639474 cri.go:89] found id: ""
	I1216 06:41:50.817863 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.817870 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:50.817875 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:50.817947 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:50.848261 1639474 cri.go:89] found id: ""
	I1216 06:41:50.848277 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.848285 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:50.848290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:50.848357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:50.875745 1639474 cri.go:89] found id: ""
	I1216 06:41:50.875771 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.875779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:50.875784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:50.875857 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:50.908128 1639474 cri.go:89] found id: ""
	I1216 06:41:50.908142 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.908149 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:50.908154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:50.908216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:50.945866 1639474 cri.go:89] found id: ""
	I1216 06:41:50.945880 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.945897 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:50.945906 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:50.945988 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:50.976758 1639474 cri.go:89] found id: ""
	I1216 06:41:50.976772 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.976779 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:50.976790 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:50.976862 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:51.012047 1639474 cri.go:89] found id: ""
	I1216 06:41:51.012061 1639474 logs.go:282] 0 containers: []
	W1216 06:41:51.012080 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:51.012088 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:51.012099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:51.079840 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:51.079863 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:51.095967 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:51.095984 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:51.168911 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:51.168920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:51.168932 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:51.241258 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:51.241281 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:53.774859 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:53.785580 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:53.785647 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:53.815910 1639474 cri.go:89] found id: ""
	I1216 06:41:53.815946 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.815954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:53.815960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:53.816034 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:53.843197 1639474 cri.go:89] found id: ""
	I1216 06:41:53.843220 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.843228 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:53.843233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:53.843303 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:53.869584 1639474 cri.go:89] found id: ""
	I1216 06:41:53.869598 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.869605 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:53.869610 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:53.869672 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:53.898126 1639474 cri.go:89] found id: ""
	I1216 06:41:53.898141 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.898148 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:53.898154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:53.898217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:53.935008 1639474 cri.go:89] found id: ""
	I1216 06:41:53.935022 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.935029 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:53.935033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:53.935094 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:53.971715 1639474 cri.go:89] found id: ""
	I1216 06:41:53.971729 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.971740 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:53.971745 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:53.971827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:54.004089 1639474 cri.go:89] found id: ""
	I1216 06:41:54.004107 1639474 logs.go:282] 0 containers: []
	W1216 06:41:54.004115 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:54.004138 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:54.004151 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:54.072434 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:54.072455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:54.088417 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:54.088436 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:54.154720 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:54.154730 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:54.154741 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:54.223744 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:54.223763 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:56.753558 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:56.764118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:56.764182 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:56.789865 1639474 cri.go:89] found id: ""
	I1216 06:41:56.789879 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.789886 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:56.789891 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:56.789954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:56.815375 1639474 cri.go:89] found id: ""
	I1216 06:41:56.815390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.815396 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:56.815401 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:56.815458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:56.843367 1639474 cri.go:89] found id: ""
	I1216 06:41:56.843381 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.843389 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:56.843394 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:56.843453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:56.869235 1639474 cri.go:89] found id: ""
	I1216 06:41:56.869249 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.869263 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:56.869268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:56.869325 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:56.894296 1639474 cri.go:89] found id: ""
	I1216 06:41:56.894310 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.894318 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:56.894323 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:56.894393 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:56.930771 1639474 cri.go:89] found id: ""
	I1216 06:41:56.930786 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.930795 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:56.930800 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:56.930877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:56.961829 1639474 cri.go:89] found id: ""
	I1216 06:41:56.961855 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.961862 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:56.961869 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:56.961880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:56.982515 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:56.982532 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:57.053403 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:57.053413 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:57.053424 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:57.122315 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:57.122338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:57.151668 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:57.151684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:59.721370 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:59.731285 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:59.731355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:59.759821 1639474 cri.go:89] found id: ""
	I1216 06:41:59.759835 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.759843 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:59.759848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:59.759905 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:59.784708 1639474 cri.go:89] found id: ""
	I1216 06:41:59.784721 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.784728 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:59.784733 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:59.784791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:59.810181 1639474 cri.go:89] found id: ""
	I1216 06:41:59.810196 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.810204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:59.810209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:59.810268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:59.836051 1639474 cri.go:89] found id: ""
	I1216 06:41:59.836072 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.836082 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:59.836094 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:59.836177 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:59.860701 1639474 cri.go:89] found id: ""
	I1216 06:41:59.860714 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.860722 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:59.860727 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:59.860786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:59.885062 1639474 cri.go:89] found id: ""
	I1216 06:41:59.885076 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.885092 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:59.885098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:59.885154 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:59.926044 1639474 cri.go:89] found id: ""
	I1216 06:41:59.926058 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.926065 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:59.926073 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:59.926099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:00.037850 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:00.037864 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:00.037877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:00.264777 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:00.264802 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:00.361496 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:00.361518 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:00.460153 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:00.460175 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:02.976790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:02.987102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:02.987180 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:03.015111 1639474 cri.go:89] found id: ""
	I1216 06:42:03.015126 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.015133 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:03.015139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:03.015202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:03.040871 1639474 cri.go:89] found id: ""
	I1216 06:42:03.040903 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.040910 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:03.040915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:03.040977 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:03.065726 1639474 cri.go:89] found id: ""
	I1216 06:42:03.065740 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.065748 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:03.065754 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:03.065813 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:03.090951 1639474 cri.go:89] found id: ""
	I1216 06:42:03.090966 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.090973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:03.090979 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:03.091037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:03.119521 1639474 cri.go:89] found id: ""
	I1216 06:42:03.119536 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.119543 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:03.119549 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:03.119615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:03.147166 1639474 cri.go:89] found id: ""
	I1216 06:42:03.147181 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.147188 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:03.147193 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:03.147267 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:03.172021 1639474 cri.go:89] found id: ""
	I1216 06:42:03.172035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.172042 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:03.172050 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:03.172060 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:03.186822 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:03.186838 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:03.250765 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:03.250775 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:03.250786 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:03.325562 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:03.325590 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:03.355074 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:03.355093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:05.922524 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:05.932734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:05.932804 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:05.960790 1639474 cri.go:89] found id: ""
	I1216 06:42:05.960804 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.960811 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:05.960816 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:05.960884 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:05.986356 1639474 cri.go:89] found id: ""
	I1216 06:42:05.986386 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.986394 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:05.986399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:05.986458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:06.015030 1639474 cri.go:89] found id: ""
	I1216 06:42:06.015046 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.015053 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:06.015058 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:06.015119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:06.041009 1639474 cri.go:89] found id: ""
	I1216 06:42:06.041023 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.041030 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:06.041035 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:06.041091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:06.068292 1639474 cri.go:89] found id: ""
	I1216 06:42:06.068306 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.068314 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:06.068319 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:06.068375 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:06.100555 1639474 cri.go:89] found id: ""
	I1216 06:42:06.100569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.100576 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:06.100582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:06.100642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:06.132353 1639474 cri.go:89] found id: ""
	I1216 06:42:06.132367 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.132374 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:06.132382 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:06.132392 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:06.201249 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:06.201259 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:06.201271 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:06.271083 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:06.271102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:06.300840 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:06.300857 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:06.369023 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:06.369043 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:08.885532 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:08.897655 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:08.897714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:08.929123 1639474 cri.go:89] found id: ""
	I1216 06:42:08.929137 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.929144 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:08.929149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:08.929216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:08.969020 1639474 cri.go:89] found id: ""
	I1216 06:42:08.969036 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.969043 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:08.969049 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:08.969107 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:08.995554 1639474 cri.go:89] found id: ""
	I1216 06:42:08.995569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.995577 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:08.995582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:08.995642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:09.023705 1639474 cri.go:89] found id: ""
	I1216 06:42:09.023720 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.023727 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:09.023732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:09.023795 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:09.050243 1639474 cri.go:89] found id: ""
	I1216 06:42:09.050263 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.050270 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:09.050275 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:09.050332 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:09.075763 1639474 cri.go:89] found id: ""
	I1216 06:42:09.075778 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.075786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:09.075791 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:09.075847 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:09.102027 1639474 cri.go:89] found id: ""
	I1216 06:42:09.102042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.102050 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:09.102058 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:09.102072 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:09.131304 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:09.131322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:09.197595 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:09.197616 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:09.214311 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:09.214329 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:09.280261 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:09.280272 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:09.280287 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:11.849647 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:11.859759 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:11.859820 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:11.885934 1639474 cri.go:89] found id: ""
	I1216 06:42:11.885948 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.885955 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:11.885960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:11.886024 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:11.915333 1639474 cri.go:89] found id: ""
	I1216 06:42:11.915347 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.915354 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:11.915359 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:11.915420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:11.958797 1639474 cri.go:89] found id: ""
	I1216 06:42:11.958811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.958818 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:11.958823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:11.958882 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:11.986843 1639474 cri.go:89] found id: ""
	I1216 06:42:11.986858 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.986865 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:11.986870 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:11.986928 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:12.016252 1639474 cri.go:89] found id: ""
	I1216 06:42:12.016268 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.016275 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:12.016280 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:12.016340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:12.047250 1639474 cri.go:89] found id: ""
	I1216 06:42:12.047264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.047271 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:12.047276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:12.047334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:12.073692 1639474 cri.go:89] found id: ""
	I1216 06:42:12.073706 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.073713 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:12.073721 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:12.073732 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:12.137759 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:12.137769 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:12.137780 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:12.206794 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:12.206815 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:12.235894 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:12.235910 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:12.304248 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:12.304267 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:14.819229 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:14.829519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:14.829579 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:14.854644 1639474 cri.go:89] found id: ""
	I1216 06:42:14.854658 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.854665 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:14.854670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:14.854744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:14.879759 1639474 cri.go:89] found id: ""
	I1216 06:42:14.879774 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.879781 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:14.879785 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:14.879846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:14.914620 1639474 cri.go:89] found id: ""
	I1216 06:42:14.914633 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.914640 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:14.914645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:14.914706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:14.949457 1639474 cri.go:89] found id: ""
	I1216 06:42:14.949470 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.949477 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:14.949482 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:14.949539 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:14.978393 1639474 cri.go:89] found id: ""
	I1216 06:42:14.978407 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.978414 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:14.978419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:14.978485 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:15.059438 1639474 cri.go:89] found id: ""
	I1216 06:42:15.059454 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.059468 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:15.059474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:15.059560 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:15.087173 1639474 cri.go:89] found id: ""
	I1216 06:42:15.087188 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.087194 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:15.087202 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:15.087212 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:15.157589 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:15.157610 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:15.187757 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:15.187774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:15.256722 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:15.256742 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:15.271447 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:15.271464 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:15.332113 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
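The cycle above repeats roughly every three seconds: minikube probes for a kube-apiserver process, lists CRI containers for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and then gathers kubelet, dmesg, CRI-O and describe-nodes output. The same diagnostics can be replayed by hand from inside the node; the sketch below only reuses commands that appear verbatim in the log (the kubectl binary path, kubeconfig path and port 8441 are taken from this run, not general defaults):

    #!/usr/bin/env bash
    # Replay of the per-cycle diagnostics logged above (run inside the node, e.g. via minikube ssh).

    # 1. Is any kube-apiserver process or container present?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    sudo crictl ps -a --quiet --name=kube-apiserver

    # 2. Recent kubelet, CRI-O and kernel messages, as gathered by the loop.
    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
    sudo journalctl -u crio -n 400 --no-pager | tail -n 40
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 40

    # 3. The apiserver query that keeps failing with "connection refused" in this log.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig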
	I1216 06:42:17.832401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:17.842950 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:17.843012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:17.871468 1639474 cri.go:89] found id: ""
	I1216 06:42:17.871483 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.871490 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:17.871496 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:17.871554 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:17.904274 1639474 cri.go:89] found id: ""
	I1216 06:42:17.904288 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.904295 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:17.904299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:17.904355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:17.936320 1639474 cri.go:89] found id: ""
	I1216 06:42:17.936334 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.936341 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:17.936346 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:17.936403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:17.967750 1639474 cri.go:89] found id: ""
	I1216 06:42:17.967764 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.967771 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:17.967775 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:17.967833 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:17.993994 1639474 cri.go:89] found id: ""
	I1216 06:42:17.994008 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.994016 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:17.994021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:17.994085 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:18.021367 1639474 cri.go:89] found id: ""
	I1216 06:42:18.021382 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.021390 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:18.021395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:18.021463 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:18.052100 1639474 cri.go:89] found id: ""
	I1216 06:42:18.052115 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.052122 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:18.052130 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:18.052141 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:18.117261 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:18.117282 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:18.132219 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:18.132235 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:18.198118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:18.198128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:18.198139 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:18.265118 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:18.265138 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:20.794027 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:20.803718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:20.803782 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:20.828191 1639474 cri.go:89] found id: ""
	I1216 06:42:20.828205 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.828212 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:20.828217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:20.828278 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:20.853411 1639474 cri.go:89] found id: ""
	I1216 06:42:20.853425 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.853432 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:20.853437 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:20.853499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:20.877825 1639474 cri.go:89] found id: ""
	I1216 06:42:20.877841 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.877848 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:20.877853 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:20.877908 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:20.910891 1639474 cri.go:89] found id: ""
	I1216 06:42:20.910904 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.910911 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:20.910916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:20.910973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:20.941025 1639474 cri.go:89] found id: ""
	I1216 06:42:20.941039 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.941045 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:20.941050 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:20.941108 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:20.973633 1639474 cri.go:89] found id: ""
	I1216 06:42:20.973647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.973654 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:20.973659 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:20.973714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:21.002805 1639474 cri.go:89] found id: ""
	I1216 06:42:21.002821 1639474 logs.go:282] 0 containers: []
	W1216 06:42:21.002828 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:21.002837 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:21.002849 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:21.068941 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:21.068961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:21.083829 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:21.083853 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:21.147337 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:21.147347 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:21.147359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:21.215583 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:21.215604 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.745376 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:23.755709 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:23.755771 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:23.781141 1639474 cri.go:89] found id: ""
	I1216 06:42:23.781155 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.781162 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:23.781168 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:23.781234 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:23.811661 1639474 cri.go:89] found id: ""
	I1216 06:42:23.811675 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.811683 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:23.811687 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:23.811745 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:23.837608 1639474 cri.go:89] found id: ""
	I1216 06:42:23.837623 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.837630 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:23.837635 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:23.837694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:23.864015 1639474 cri.go:89] found id: ""
	I1216 06:42:23.864041 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.864051 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:23.864057 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:23.864124 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:23.889789 1639474 cri.go:89] found id: ""
	I1216 06:42:23.889806 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.889813 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:23.889818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:23.889877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:23.918576 1639474 cri.go:89] found id: ""
	I1216 06:42:23.918590 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.918598 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:23.918603 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:23.918661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:23.950516 1639474 cri.go:89] found id: ""
	I1216 06:42:23.950531 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.950537 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:23.950545 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:23.950555 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.980911 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:23.980928 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:24.047333 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:24.047355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:24.063020 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:24.063037 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:24.131565 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:24.131574 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:24.131593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.704797 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:26.715064 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:26.715144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:26.741016 1639474 cri.go:89] found id: ""
	I1216 06:42:26.741030 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.741037 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:26.741043 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:26.741102 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:26.771178 1639474 cri.go:89] found id: ""
	I1216 06:42:26.771192 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.771200 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:26.771205 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:26.771263 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:26.796426 1639474 cri.go:89] found id: ""
	I1216 06:42:26.796440 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.796447 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:26.796452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:26.796530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:26.822428 1639474 cri.go:89] found id: ""
	I1216 06:42:26.822444 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.822451 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:26.822456 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:26.822512 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:26.855530 1639474 cri.go:89] found id: ""
	I1216 06:42:26.855545 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.855552 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:26.855557 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:26.855617 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:26.880135 1639474 cri.go:89] found id: ""
	I1216 06:42:26.880149 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.880156 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:26.880161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:26.880219 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:26.917307 1639474 cri.go:89] found id: ""
	I1216 06:42:26.917321 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.917327 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:26.917335 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:26.917347 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.997666 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:26.997690 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:27.033638 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:27.033662 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:27.104861 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:27.104880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:27.119683 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:27.119699 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:27.187945 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:29.688270 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:29.698566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:29.698629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:29.724518 1639474 cri.go:89] found id: ""
	I1216 06:42:29.724532 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.724539 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:29.724544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:29.724605 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:29.749436 1639474 cri.go:89] found id: ""
	I1216 06:42:29.749451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.749458 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:29.749463 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:29.749525 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:29.774261 1639474 cri.go:89] found id: ""
	I1216 06:42:29.774276 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.774283 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:29.774290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:29.774349 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:29.799455 1639474 cri.go:89] found id: ""
	I1216 06:42:29.799469 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.799478 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:29.799483 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:29.799541 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:29.823692 1639474 cri.go:89] found id: ""
	I1216 06:42:29.823707 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.823714 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:29.823718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:29.823784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:29.851131 1639474 cri.go:89] found id: ""
	I1216 06:42:29.851156 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.851164 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:29.851169 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:29.851239 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:29.875892 1639474 cri.go:89] found id: ""
	I1216 06:42:29.875906 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.875923 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:29.875931 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:29.875942 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:29.949752 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:29.949772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:29.966843 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:29.966860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:30.075177 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:30.075189 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:30.075201 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:30.153503 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:30.153525 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:32.683959 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:32.695552 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:32.695611 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:32.719250 1639474 cri.go:89] found id: ""
	I1216 06:42:32.719264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.719271 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:32.719276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:32.719335 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:32.744437 1639474 cri.go:89] found id: ""
	I1216 06:42:32.744451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.744459 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:32.744464 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:32.744568 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:32.772181 1639474 cri.go:89] found id: ""
	I1216 06:42:32.772196 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.772204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:32.772209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:32.772273 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:32.799021 1639474 cri.go:89] found id: ""
	I1216 06:42:32.799035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.799041 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:32.799046 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:32.799103 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:32.826452 1639474 cri.go:89] found id: ""
	I1216 06:42:32.826466 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.826473 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:32.826478 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:32.826535 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:32.854867 1639474 cri.go:89] found id: ""
	I1216 06:42:32.854881 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.854888 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:32.854893 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:32.854953 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:32.883584 1639474 cri.go:89] found id: ""
	I1216 06:42:32.883608 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.883615 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:32.883624 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:32.883635 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:32.969443 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:32.969472 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:33.000330 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:33.000354 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:33.068289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:33.068311 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:33.083127 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:33.083145 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:33.154304 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
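
Every "describe nodes" attempt fails identically because nothing is serving on the apiserver port the kubeconfig points at (localhost:8441). A quick way to confirm that state, sketched here with a plain TCP dial (the port number is taken from the errors above; this is an illustration, not part of the test harness):

// dialcheck.go: is anything listening on the apiserver port at all?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// The state shown in this log: nothing bound to 8441, so every
		// kubectl call fails with "connection refused".
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}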
	I1216 06:42:35.655139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:35.665534 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:35.665616 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:35.691995 1639474 cri.go:89] found id: ""
	I1216 06:42:35.692009 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.692016 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:35.692021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:35.692079 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:35.718728 1639474 cri.go:89] found id: ""
	I1216 06:42:35.718742 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.718748 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:35.718753 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:35.718812 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:35.743314 1639474 cri.go:89] found id: ""
	I1216 06:42:35.743328 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.743334 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:35.743339 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:35.743400 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:35.767871 1639474 cri.go:89] found id: ""
	I1216 06:42:35.767885 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.767893 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:35.767897 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:35.767958 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:35.791769 1639474 cri.go:89] found id: ""
	I1216 06:42:35.791783 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.791790 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:35.791795 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:35.791854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:35.819002 1639474 cri.go:89] found id: ""
	I1216 06:42:35.819016 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.819023 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:35.819028 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:35.819083 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:35.843378 1639474 cri.go:89] found id: ""
	I1216 06:42:35.843392 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.843399 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:35.843407 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:35.843417 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:35.912874 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:35.912893 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:35.930936 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:35.930952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:36.006314 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:36.006326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:36.006338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:36.080077 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:36.080099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
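
When none of the component containers exist, the fallback diagnostics are host-level: the kubelet and CRI-O journals, kernel messages, and a raw container listing. The exact commands appear in the log lines above; a small sketch that runs the same bundle in one place (assuming bash, journalctl, dmesg and crictl are present on the node) would be:

// gather.go: run the same host-level log bundle the "Gathering logs for ..."
// lines show, and print each command's combined output.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		// Simplified from the logged `which crictl || echo crictl` form.
		"sudo crictl ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", c, err, out)
	}
}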
	I1216 06:42:38.612139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:38.622353 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:38.622412 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:38.648583 1639474 cri.go:89] found id: ""
	I1216 06:42:38.648597 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.648604 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:38.648613 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:38.648671 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:38.674035 1639474 cri.go:89] found id: ""
	I1216 06:42:38.674049 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.674056 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:38.674061 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:38.674119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:38.699213 1639474 cri.go:89] found id: ""
	I1216 06:42:38.699228 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.699234 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:38.699239 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:38.699294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:38.723415 1639474 cri.go:89] found id: ""
	I1216 06:42:38.723429 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.723436 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:38.723441 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:38.723499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:38.751059 1639474 cri.go:89] found id: ""
	I1216 06:42:38.751074 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.751081 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:38.751086 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:38.751146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:38.779542 1639474 cri.go:89] found id: ""
	I1216 06:42:38.779557 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.779584 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:38.779589 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:38.779660 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:38.813466 1639474 cri.go:89] found id: ""
	I1216 06:42:38.813480 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.813488 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:38.813496 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:38.813507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:38.842140 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:38.842158 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:38.908007 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:38.908027 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:38.923600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:38.923618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:38.995488 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:38.995498 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:38.995509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:41.565694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:41.575799 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:41.575860 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:41.600796 1639474 cri.go:89] found id: ""
	I1216 06:42:41.600811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.600817 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:41.600822 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:41.600879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:41.625792 1639474 cri.go:89] found id: ""
	I1216 06:42:41.625807 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.625814 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:41.625818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:41.625875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:41.650243 1639474 cri.go:89] found id: ""
	I1216 06:42:41.650257 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.650264 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:41.650269 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:41.650328 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:41.675889 1639474 cri.go:89] found id: ""
	I1216 06:42:41.675915 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.675923 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:41.675928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:41.675993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:41.703050 1639474 cri.go:89] found id: ""
	I1216 06:42:41.703064 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.703082 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:41.703088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:41.703146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:41.729269 1639474 cri.go:89] found id: ""
	I1216 06:42:41.729283 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.729293 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:41.729299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:41.729369 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:41.753781 1639474 cri.go:89] found id: ""
	I1216 06:42:41.753796 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.753803 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:41.753811 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:41.753821 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:41.783522 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:41.783538 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:41.848274 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:41.848295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:41.863600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:41.863618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:41.936160 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:41.936170 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:41.936181 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.511341 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:44.521587 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:44.521648 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:44.547007 1639474 cri.go:89] found id: ""
	I1216 06:42:44.547021 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.547028 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:44.547033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:44.547096 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:44.572902 1639474 cri.go:89] found id: ""
	I1216 06:42:44.572917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.572924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:44.572928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:44.572995 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:44.598645 1639474 cri.go:89] found id: ""
	I1216 06:42:44.598659 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.598667 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:44.598672 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:44.598731 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:44.627366 1639474 cri.go:89] found id: ""
	I1216 06:42:44.627381 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.627388 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:44.627396 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:44.627452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:44.654294 1639474 cri.go:89] found id: ""
	I1216 06:42:44.654309 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.654319 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:44.654324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:44.654382 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:44.679363 1639474 cri.go:89] found id: ""
	I1216 06:42:44.679378 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.679385 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:44.679392 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:44.679452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:44.714760 1639474 cri.go:89] found id: ""
	I1216 06:42:44.714775 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.714781 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:44.714789 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:44.714800 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:44.779035 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:44.779055 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:44.793727 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:44.793745 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:44.860570 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:44.860581 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:44.860594 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.934290 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:44.934310 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:47.465385 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:47.475377 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:47.475436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:47.503015 1639474 cri.go:89] found id: ""
	I1216 06:42:47.503042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.503049 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:47.503055 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:47.503136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:47.528903 1639474 cri.go:89] found id: ""
	I1216 06:42:47.528917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.528924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:47.528929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:47.528989 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:47.554766 1639474 cri.go:89] found id: ""
	I1216 06:42:47.554781 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.554788 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:47.554792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:47.554858 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:47.585092 1639474 cri.go:89] found id: ""
	I1216 06:42:47.585106 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.585113 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:47.585118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:47.585214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:47.610493 1639474 cri.go:89] found id: ""
	I1216 06:42:47.610508 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.610514 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:47.610519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:47.610577 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:47.635340 1639474 cri.go:89] found id: ""
	I1216 06:42:47.635354 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.635361 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:47.635365 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:47.635424 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:47.661321 1639474 cri.go:89] found id: ""
	I1216 06:42:47.661335 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.661342 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:47.661349 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:47.661360 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:47.726879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:47.726898 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:47.741659 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:47.741684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:47.804784 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:47.804795 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:47.804807 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:47.871075 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:47.871096 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.410207 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:50.419946 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:50.420007 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:50.446668 1639474 cri.go:89] found id: ""
	I1216 06:42:50.446683 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.446689 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:50.446694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:50.446753 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:50.471089 1639474 cri.go:89] found id: ""
	I1216 06:42:50.471119 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.471126 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:50.471131 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:50.471201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:50.496821 1639474 cri.go:89] found id: ""
	I1216 06:42:50.496836 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.496843 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:50.496848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:50.496906 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:50.522621 1639474 cri.go:89] found id: ""
	I1216 06:42:50.522647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.522655 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:50.522660 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:50.522720 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:50.547813 1639474 cri.go:89] found id: ""
	I1216 06:42:50.547828 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.547847 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:50.547858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:50.547926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:50.573695 1639474 cri.go:89] found id: ""
	I1216 06:42:50.573709 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.573716 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:50.573734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:50.573791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:50.597701 1639474 cri.go:89] found id: ""
	I1216 06:42:50.597728 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.597735 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:50.597743 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:50.597754 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.634166 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:50.634183 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:50.700131 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:50.700152 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:50.714678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:50.714694 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:50.782436 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:50.782446 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:50.782457 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:53.352592 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:53.362386 1639474 kubeadm.go:602] duration metric: took 4m3.23343297s to restartPrimaryControlPlane
	W1216 06:42:53.362440 1639474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 06:42:53.362522 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:42:53.770157 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:42:53.783560 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:42:53.791651 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:42:53.791714 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:42:53.800044 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:42:53.800054 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:42:53.800109 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:42:53.808053 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:42:53.808117 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:42:53.815698 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:42:53.823700 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:42:53.823760 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:42:53.831721 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.840020 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:42:53.840081 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.848003 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:42:53.856083 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:42:53.856151 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
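
The cleanup above follows a simple rule: a kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8441; a missing file or a different endpoint means it is removed so the subsequent kubeadm init can regenerate it. A local sketch of that rule (minikube does this over SSH with grep and rm, as logged; running it directly would require root):

// staleconf.go: keep a kubeconfig only if it points at the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm rewrites it.
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
			continue
		}
		fmt.Printf("kept %s\n", f)
	}
}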
	I1216 06:42:53.863882 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:42:53.905755 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:42:53.905814 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:42:53.975149 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:42:53.975215 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:42:53.975250 1639474 kubeadm.go:319] OS: Linux
	I1216 06:42:53.975294 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:42:53.975341 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:42:53.975388 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:42:53.975435 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:42:53.975482 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:42:53.975528 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:42:53.975572 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:42:53.975619 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:42:53.975663 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:42:54.043340 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:42:54.043458 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:42:54.043554 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:42:54.051413 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:42:54.053411 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:42:54.053534 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:42:54.053635 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:42:54.053726 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:42:54.053790 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:42:54.053864 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:42:54.053921 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:42:54.054179 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:42:54.054243 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:42:54.054338 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:42:54.054707 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:42:54.054967 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:42:54.055037 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:42:54.157358 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:42:54.374409 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:42:54.451048 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:42:54.729890 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:42:55.123905 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:42:55.124705 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:42:55.129362 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:42:55.130938 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:42:55.131069 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:42:55.131195 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:42:55.132057 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:42:55.147012 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:42:55.147116 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:42:55.155648 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:42:55.155999 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:42:55.156106 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:42:55.287137 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:42:55.287251 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:46:55.288217 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001159637s
	I1216 06:46:55.288243 1639474 kubeadm.go:319] 
	I1216 06:46:55.288304 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:46:55.288336 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:46:55.288440 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:46:55.288445 1639474 kubeadm.go:319] 
	I1216 06:46:55.288565 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:46:55.288597 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:46:55.288627 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:46:55.288630 1639474 kubeadm.go:319] 
	I1216 06:46:55.292707 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:46:55.293173 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:46:55.293300 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:46:55.293545 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:46:55.293552 1639474 kubeadm.go:319] 
	I1216 06:46:55.293641 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
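
The failing step is kubeadm's kubelet-check: it polls the kubelet's local healthz endpoint for up to four minutes and gives up when the kubelet never starts answering, which is exactly the timeout reported above. A rough equivalent of that wait loop (endpoint and deadline taken from the log output; an illustration, not kubeadm's code):

// kubeletwait.go: poll http://127.0.0.1:10248/healthz until healthy or timeout.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
		resp, err := http.DefaultClient.Do(req)
		cancel()
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet did not become healthy before the deadline")
}

When this loop times out, the next stop is 'systemctl status kubelet' and 'journalctl -xeu kubelet' on the node, as the error text itself suggests.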
	W1216 06:46:55.293765 1639474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001159637s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:46:55.293855 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:46:55.704413 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:46:55.717800 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:46:55.717860 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:46:55.726221 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:46:55.726230 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:46:55.726283 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:46:55.734520 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:46:55.734578 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:46:55.742443 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:46:55.750333 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:46:55.750396 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:46:55.758306 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.766326 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:46:55.766405 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.774041 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:46:55.782003 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:46:55.782061 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:46:55.789651 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:46:55.828645 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:46:55.828882 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:46:55.903247 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:46:55.903309 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:46:55.903344 1639474 kubeadm.go:319] OS: Linux
	I1216 06:46:55.903387 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:46:55.903435 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:46:55.903481 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:46:55.903528 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:46:55.903575 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:46:55.903627 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:46:55.903672 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:46:55.903719 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:46:55.903764 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:46:55.978404 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:46:55.978523 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:46:55.978635 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:46:55.988968 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:46:55.992562 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:46:55.992651 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:46:55.992728 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:46:55.992809 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:46:55.992874 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:46:55.992948 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:46:55.993006 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:46:55.993073 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:46:55.993138 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:46:55.993217 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:46:55.993295 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:46:55.993334 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:46:55.993394 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:46:56.216895 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:46:56.479326 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:46:56.885081 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:46:57.284813 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:46:57.705019 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:46:57.705808 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:46:57.708929 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:46:57.712185 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:46:57.712286 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:46:57.712364 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:46:57.713358 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:46:57.728440 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:46:57.729026 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:46:57.736761 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:46:57.737279 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:46:57.737495 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:46:57.864121 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:46:57.864234 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:50:57.863911 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000152952s
	I1216 06:50:57.863934 1639474 kubeadm.go:319] 
	I1216 06:50:57.863990 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:50:57.864023 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:50:57.864128 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:50:57.864133 1639474 kubeadm.go:319] 
	I1216 06:50:57.864236 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:50:57.864267 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:50:57.864298 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:50:57.864301 1639474 kubeadm.go:319] 
	I1216 06:50:57.868420 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:50:57.868920 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:50:57.869030 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:50:57.869291 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:50:57.869296 1639474 kubeadm.go:319] 
	I1216 06:50:57.869364 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:50:57.869421 1639474 kubeadm.go:403] duration metric: took 12m7.776167752s to StartCluster
	I1216 06:50:57.869453 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:50:57.869520 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:50:57.901135 1639474 cri.go:89] found id: ""
	I1216 06:50:57.901151 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.901158 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:50:57.901163 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:50:57.901226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:50:57.925331 1639474 cri.go:89] found id: ""
	I1216 06:50:57.925345 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.925352 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:50:57.925357 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:50:57.925415 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:50:57.950341 1639474 cri.go:89] found id: ""
	I1216 06:50:57.950356 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.950363 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:50:57.950367 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:50:57.950426 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:50:57.975123 1639474 cri.go:89] found id: ""
	I1216 06:50:57.975137 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.975144 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:50:57.975149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:50:57.975208 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:50:58.004659 1639474 cri.go:89] found id: ""
	I1216 06:50:58.004676 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.004684 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:50:58.004689 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:50:58.004760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:50:58.030464 1639474 cri.go:89] found id: ""
	I1216 06:50:58.030478 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.030485 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:50:58.030491 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:50:58.030552 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:50:58.056049 1639474 cri.go:89] found id: ""
	I1216 06:50:58.056063 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.056071 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:50:58.056079 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:50:58.056091 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:50:58.124116 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:50:58.124137 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:50:58.139439 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:50:58.139455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:50:58.229902 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:50:58.229914 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:50:58.229925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:50:58.301956 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:50:58.301977 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:50:58.330306 1639474 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:50:58.330348 1639474 out.go:285] * 
	W1216 06:50:58.330448 1639474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.330506 1639474 out.go:285] * 
	W1216 06:50:58.332927 1639474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:50:58.338210 1639474 out.go:203] 
	W1216 06:50:58.341028 1639474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.341164 1639474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:50:58.341212 1639474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:50:58.344413 1639474 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553471769Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553507896Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553554657Z" level=info msg="Create NRI interface"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553657485Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553665107Z" level=info msg="runtime interface created"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553674699Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553680746Z" level=info msg="runtime interface starting up..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553686137Z" level=info msg="starting plugins..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553698814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553771561Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:38:48 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.046654305Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=2afa36a7-e595-4e9e-9866-100014f74db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.047561496Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bfee085e-d788-43aa-852e-e818968557f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048165668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8209edd3-2ad3-4cea-9d15-760a1b94c10d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048839782Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f38b3b25-171e-488b-9dbb-3a4615d07ce7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049385123Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=674d3a91-05c7-4375-a638-2bb51d77e82a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049934157Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d7315967-45e5-4ab2-b579-15a88e3c5cf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.050441213Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2d27746-f739-4711-a521-d245b78e775c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.981581513Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=cc27c34f-1129-41fd-83b5-8698b0697603 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982462832Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f632e983-ad57-48b2-98c3-8802e4b6bb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982972654Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4d99c7a4-52a2-4a4f-9569-9d8a29ee230d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983463866Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=824a4ba3-63ed-49ce-a194-3bf34f462483 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983972891Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=52cc52f0-f1ca-4fc4-a91a-13dd8c19e754 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984501125Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=baf81d2d-269c-44fd-a82c-811876adf596 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984974015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=88fe0e4e-4ea7-4b38-a635-f3138f370377 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:51:01.993735   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:51:01.994925   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:51:01.996985   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:51:01.998160   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:51:01.998954   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:51:02 up  9:33,  0 user,  load average: 0.04, 0.14, 0.43
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:50:59 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:51:00 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 16 06:51:00 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:51:00 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:51:00 functional-364120 kubelet[21209]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:51:00 functional-364120 kubelet[21209]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:51:00 functional-364120 kubelet[21209]: E1216 06:51:00.491614   21209 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:51:00 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:51:00 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:51:01 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 16 06:51:01 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:51:01 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:51:01 functional-364120 kubelet[21245]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:51:01 functional-364120 kubelet[21245]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:51:01 functional-364120 kubelet[21245]: E1216 06:51:01.212242   21245 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:51:01 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:51:01 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:51:01 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 967.
	Dec 16 06:51:01 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:51:01 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:51:01 functional-364120 kubelet[21325]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:51:01 functional-364120 kubelet[21325]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:51:01 functional-364120 kubelet[21325]: E1216 06:51:01.967923   21325 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:51:01 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:51:01 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
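
The kubelet excerpt above pins down the root cause of this ComponentHealth failure (and of the repeated kubeadm init timeouts earlier in the log): kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), which matches the kubeadm warning about explicitly enabling cgroup v1 support via the KubeletConfiguration option 'FailCgroupV1'. A minimal shell sketch for confirming the cgroup version and retrying with the workaround minikube itself suggests above (the stat check is a generic Linux check, not part of the test harness):

    # "cgroup2fs" means cgroup v2; "tmpfs" means the host is still on cgroup v1
    stat -fc %T /sys/fs/cgroup/
    # the same check inside the minikube node container
    docker exec functional-364120 stat -fc %T /sys/fs/cgroup/
    # retry with the flag suggested in the log above
    out/minikube-linux-arm64 start -p functional-364120 --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
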
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (346.030628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.20s)
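helpers_test drives these post-mortems with 'minikube status --format=...', a Go template over the profile's status fields; the nonzero exit reflects component state rather than a harness error, which is why it is annotated "may be ok" above. A small sketch (Host, Kubelet and APIServer are the fields minikube status prints; assumed to be valid template fields on this build) that reports the neighbouring fields in one call:

    out/minikube-linux-arm64 status -p functional-364120 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'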

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-364120 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-364120 apply -f testdata/invalidsvc.yaml: exit status 1 (74.46171ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-364120 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.07s)
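This failure never reaches the invalid-service scenario itself: kubectl apply dies on its first request because the apiserver at 192.168.49.2:8441 refuses connections, consistent with the stopped control plane above. A quick reachability check before re-running the apply might look like this (curl and kubectl assumed present on the test host; the address and port are taken from the error above):

    kubectl --context functional-364120 cluster-info
    curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"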

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-364120 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-364120 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-364120 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-364120 --alsologtostderr -v=1] stderr:
I1216 06:53:33.864351 1657105 out.go:360] Setting OutFile to fd 1 ...
I1216 06:53:33.864530 1657105 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:33.864551 1657105 out.go:374] Setting ErrFile to fd 2...
I1216 06:53:33.864567 1657105 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:33.864854 1657105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:53:33.865131 1657105 mustload.go:66] Loading cluster: functional-364120
I1216 06:53:33.865580 1657105 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:33.866105 1657105 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:53:33.883505 1657105 host.go:66] Checking if "functional-364120" exists ...
I1216 06:53:33.883840 1657105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 06:53:33.937314 1657105 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.928085124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 06:53:33.937461 1657105 api_server.go:166] Checking apiserver status ...
I1216 06:53:33.937535 1657105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 06:53:33.937581 1657105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:53:33.954627 1657105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
W1216 06:53:34.058450 1657105 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1216 06:53:34.061748 1657105 out.go:179] * The control-plane node functional-364120 apiserver is not running: (state=Stopped)
I1216 06:53:34.064620 1657105 out.go:179]   To start a cluster, run: "minikube start -p functional-364120"
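The dashboard daemon above exits without ever printing a URL, which is what functional_test.go:933 flags ("output didn't produce a URL"). As a point of reference, a minimal Go sketch of that kind of check, watching a command's output for the first http(s) URL and giving up after a timeout; the helper name waitForURL and the 10-second timeout are illustrative assumptions, not the test suite's own code:

	package main

	import (
		"bufio"
		"fmt"
		"io"
		"regexp"
		"strings"
		"time"
	)

	var urlRe = regexp.MustCompile(`https?://\S+`)

	// waitForURL reads lines from r until one contains a URL or the timeout expires.
	func waitForURL(r io.Reader, timeout time.Duration) (string, error) {
		found := make(chan string, 1)
		go func() {
			sc := bufio.NewScanner(r)
			for sc.Scan() {
				if u := urlRe.FindString(sc.Text()); u != "" {
					found <- u
					return
				}
			}
		}()
		select {
		case u := <-found:
			return u, nil
		case <-time.After(timeout):
			return "", fmt.Errorf("output didn't produce a URL within %s", timeout)
		}
	}

	func main() {
		out := strings.NewReader("* Verifying dashboard health ...\nhttp://127.0.0.1:36195/\n")
		u, err := waitForURL(out, 10*time.Second)
		fmt.Println(u, err)
	}
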
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
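The "new ssh client ... Port:34260" line in the dashboard log comes from resolving the container's published 22/tcp port with the Go template visible in the cli_runner call above, and the same mapping appears under NetworkSettings.Ports in this inspect dump. A minimal sketch of that lookup via os/exec (error handling trimmed; the container name is just the profile from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port that Docker mapped to the container's 22/tcp.
	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("functional-364120")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", port) // 34260 in the inspect output above
	}
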
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (309.569848ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-364120 service hello-node --url                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001:/mount-9p --alsologtostderr -v=1              │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh -- ls -la /mount-9p                                                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh cat /mount-9p/test-1765868003827322411                                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh sudo umount -f /mount-9p                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3205506699/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh -- ls -la /mount-9p                                                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh sudo umount -f /mount-9p                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount1                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount1 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount2 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount3 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount1                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh findmnt -T /mount2                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh findmnt -T /mount3                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ mount     │ -p functional-364120 --kill=true                                                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ start     │ -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ start     │ -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ start     │ -p functional-364120 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-364120 --alsologtostderr -v=1                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:53:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:53:33.621593 1657034 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:53:33.621738 1657034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.621751 1657034 out.go:374] Setting ErrFile to fd 2...
	I1216 06:53:33.621757 1657034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.622008 1657034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:53:33.622392 1657034 out.go:368] Setting JSON to false
	I1216 06:53:33.623268 1657034 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":34565,"bootTime":1765833449,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:53:33.623334 1657034 start.go:143] virtualization:  
	I1216 06:53:33.626568 1657034 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:53:33.630377 1657034 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:53:33.630472 1657034 notify.go:221] Checking for updates...
	I1216 06:53:33.636109 1657034 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:53:33.639064 1657034 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:53:33.641980 1657034 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:53:33.644866 1657034 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:53:33.647809 1657034 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:53:33.651292 1657034 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:53:33.651909 1657034 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:53:33.675417 1657034 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:53:33.675545 1657034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.738351 1657034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.728990657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.738463 1657034 docker.go:319] overlay module found
	I1216 06:53:33.741436 1657034 out.go:179] * Using the docker driver based on existing profile
	I1216 06:53:33.744303 1657034 start.go:309] selected driver: docker
	I1216 06:53:33.744333 1657034 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.744446 1657034 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:53:33.744631 1657034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.805595 1657034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.791477969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.806060 1657034 cni.go:84] Creating CNI manager for ""
	I1216 06:53:33.806123 1657034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:53:33.806164 1657034 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.809236 1657034 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553471769Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553507896Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553554657Z" level=info msg="Create NRI interface"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553657485Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553665107Z" level=info msg="runtime interface created"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553674699Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553680746Z" level=info msg="runtime interface starting up..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553686137Z" level=info msg="starting plugins..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553698814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553771561Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:38:48 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.046654305Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=2afa36a7-e595-4e9e-9866-100014f74db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.047561496Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bfee085e-d788-43aa-852e-e818968557f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048165668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8209edd3-2ad3-4cea-9d15-760a1b94c10d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048839782Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f38b3b25-171e-488b-9dbb-3a4615d07ce7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049385123Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=674d3a91-05c7-4375-a638-2bb51d77e82a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049934157Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d7315967-45e5-4ab2-b579-15a88e3c5cf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.050441213Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2d27746-f739-4711-a521-d245b78e775c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.981581513Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=cc27c34f-1129-41fd-83b5-8698b0697603 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982462832Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f632e983-ad57-48b2-98c3-8802e4b6bb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982972654Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4d99c7a4-52a2-4a4f-9569-9d8a29ee230d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983463866Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=824a4ba3-63ed-49ce-a194-3bf34f462483 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983972891Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=52cc52f0-f1ca-4fc4-a91a-13dd8c19e754 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984501125Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=baf81d2d-269c-44fd-a82c-811876adf596 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984974015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=88fe0e4e-4ea7-4b38-a635-f3138f370377 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:53:35.115620   23625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:35.116157   23625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:35.117831   23625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:35.118391   23625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:35.120087   23625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:53:35 up  9:36,  0 user,  load average: 0.55, 0.25, 0.43
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:53:32 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:33 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1169.
	Dec 16 06:53:33 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:33 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:33 functional-364120 kubelet[23505]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:33 functional-364120 kubelet[23505]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:33 functional-364120 kubelet[23505]: E1216 06:53:33.456628   23505 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:33 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:33 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1170.
	Dec 16 06:53:34 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:34 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:34 functional-364120 kubelet[23520]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:34 functional-364120 kubelet[23520]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:34 functional-364120 kubelet[23520]: E1216 06:53:34.205499   23520 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1171.
	Dec 16 06:53:34 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:34 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:34 functional-364120 kubelet[23587]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:34 functional-364120 kubelet[23587]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:34 functional-364120 kubelet[23587]: E1216 06:53:34.971940   23587 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
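The kubelet section of the logs above shows why the apiserver never comes back: every systemd restart (counter 1169-1171) fails validation with "kubelet is configured to not run on a host using cgroup v1". A quick way to confirm which cgroup hierarchy the host or kicbase container is on is to look for the cgroup v2 control file; this is a hypothetical triage step on my part, not something the test suite runs:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /sys/fs/cgroup/cgroup.controllers exists only on a cgroup v2 unified hierarchy.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("host is using cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("host appears to be on cgroup v1; this kubelet build refuses to start here")
		}
	}
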
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (304.060692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.70s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 status: exit status 2 (371.662391ms)

                                                
                                                
-- stdout --
	functional-364120
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-364120 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (334.027836ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-364120 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 status -o json: exit status 2 (304.98828ms)

                                                
                                                
-- stdout --
	{"Name":"functional-364120","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-364120 status -o json" : exit status 2
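For context, the document printed by "minikube status -o json" above is small enough to decode directly; a minimal sketch using a struct that mirrors only the keys visible in that output (it is not the test suite's own type), showing that Kubelet and APIServer report Stopped while the host container is Running:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type clusterStatus struct {
		Name       string `json:"Name"`
		Host       string `json:"Host"`
		Kubelet    string `json:"Kubelet"`
		APIServer  string `json:"APIServer"`
		Kubeconfig string `json:"Kubeconfig"`
		Worker     bool   `json:"Worker"`
	}

	func main() {
		raw := `{"Name":"functional-364120","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		// A healthy profile would report Running for both components here.
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}
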
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
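The JSON above is the docker container inspect dump for the functional-364120 node container; the NetworkSettings.Ports map in it is what later provisioning steps query (with docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'") to find the host port that fronts the container's SSH port. As a minimal illustration only (not minikube's own code; the program name and layout here are ours), a Go sketch that extracts the same mapping with encoding/json:

// hostport.go - illustrative only; not part of minikube.
// Reads `docker container inspect <name>` and prints the host endpoint that
// maps to container port 22/tcp, mirroring the template
// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} used in the logs.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	name := "functional-364120" // container name from the report above
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	out, err := exec.Command("docker", "container", "inspect", name).Output()
	if err != nil {
		log.Fatalf("docker inspect %s: %v", name, err)
	}
	var entries []inspectEntry // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatalf("decode inspect output: %v", err)
	}
	if len(entries) == 0 {
		log.Fatalf("no container named %s", name)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		log.Fatalf("no host binding for 22/tcp")
	}
	// For the container inspected above this prints 127.0.0.1:34260.
	fmt.Printf("%s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
}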
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (333.577765ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
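Here out/minikube-linux-arm64 status --format={{.Host}} exits with status 2 while still printing Running, and the harness records it as "may be ok": status can exit non-zero to flag degraded components even when the host container itself is up. A hypothetical Go test helper (file and function names are ours, not from helpers_test.go) that tolerates that in the same way:

// statuscheck_test.go - hypothetical sketch, not the real helpers_test.go.
// Runs `minikube status --format={{.Host}}` and, like the harness above,
// treats a non-zero exit as non-fatal as long as the host reports Running.
package statuscheck

import (
	"errors"
	"os/exec"
	"strings"
	"testing"
)

func checkHostRunning(t *testing.T, profile string) {
	t.Helper()
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: only log it, mirroring "may be ok" in the report.
		t.Logf("status exited %d (may be ok), host=%q", exitErr.ExitCode(), host)
	} else if err != nil {
		t.Fatalf("running minikube status: %v", err)
	}
	if host != "Running" {
		t.Fatalf("expected host Running, got %q", host)
	}
}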
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-364120 service list                                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ service │ functional-364120 service list -o json                                                                                                              │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ service │ functional-364120 service --namespace=default --https --url hello-node                                                                              │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ service │ functional-364120 service hello-node --url --format={{.IP}}                                                                                         │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ service │ functional-364120 service hello-node --url                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh     │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount   │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001:/mount-9p --alsologtostderr -v=1              │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh     │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh -- ls -la /mount-9p                                                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh cat /mount-9p/test-1765868003827322411                                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh     │ functional-364120 ssh sudo umount -f /mount-9p                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount   │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3205506699/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh     │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh -- ls -la /mount-9p                                                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh sudo umount -f /mount-9p                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh     │ functional-364120 ssh findmnt -T /mount1                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount   │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount1 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount   │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount2 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount   │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount3 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh     │ functional-364120 ssh findmnt -T /mount1                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh findmnt -T /mount2                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh     │ functional-364120 ssh findmnt -T /mount3                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ mount   │ -p functional-364120 --kill=true                                                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:38:45
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:38:45.382114 1639474 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:38:45.382275 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382279 1639474 out.go:374] Setting ErrFile to fd 2...
	I1216 06:38:45.382283 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382644 1639474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:38:45.383081 1639474 out.go:368] Setting JSON to false
	I1216 06:38:45.383946 1639474 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33677,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:38:45.384032 1639474 start.go:143] virtualization:  
	I1216 06:38:45.387610 1639474 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:38:45.391422 1639474 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:38:45.391485 1639474 notify.go:221] Checking for updates...
	I1216 06:38:45.397275 1639474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:38:45.400538 1639474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:38:45.403348 1639474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:38:45.406183 1639474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:38:45.410019 1639474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:38:45.413394 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:45.413485 1639474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:38:45.451796 1639474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:38:45.451901 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.529304 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.519310041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.529400 1639474 docker.go:319] overlay module found
	I1216 06:38:45.532456 1639474 out.go:179] * Using the docker driver based on existing profile
	I1216 06:38:45.535342 1639474 start.go:309] selected driver: docker
	I1216 06:38:45.535352 1639474 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.535432 1639474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:38:45.535555 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.605792 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.594564391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.606168 1639474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:38:45.606189 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:45.606237 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:45.606285 1639474 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.611347 1639474 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:38:45.614388 1639474 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:38:45.617318 1639474 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:38:45.620204 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:45.620247 1639474 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:38:45.620256 1639474 cache.go:65] Caching tarball of preloaded images
	I1216 06:38:45.620287 1639474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:38:45.620351 1639474 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:38:45.620360 1639474 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:38:45.620487 1639474 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:38:45.639567 1639474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:38:45.639578 1639474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:38:45.639591 1639474 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:38:45.639630 1639474 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:38:45.639687 1639474 start.go:364] duration metric: took 37.908µs to acquireMachinesLock for "functional-364120"
	I1216 06:38:45.639706 1639474 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:38:45.639711 1639474 fix.go:54] fixHost starting: 
	I1216 06:38:45.639996 1639474 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:38:45.656952 1639474 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:38:45.656970 1639474 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:38:45.660116 1639474 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:38:45.660138 1639474 machine.go:94] provisionDockerMachine start ...
	I1216 06:38:45.660218 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.677387 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.677705 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.677711 1639474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:38:45.812247 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.812262 1639474 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:38:45.812325 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.830038 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.830333 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.830342 1639474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:38:45.969440 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.969519 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.987438 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.987738 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.987751 1639474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:38:46.120750 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:38:46.120766 1639474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:38:46.120795 1639474 ubuntu.go:190] setting up certificates
	I1216 06:38:46.120811 1639474 provision.go:84] configureAuth start
	I1216 06:38:46.120880 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:46.139450 1639474 provision.go:143] copyHostCerts
	I1216 06:38:46.139518 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:38:46.139535 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:38:46.139611 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:38:46.139701 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:38:46.139705 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:38:46.139730 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:38:46.139777 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:38:46.139780 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:38:46.139802 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:38:46.139846 1639474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:38:46.453267 1639474 provision.go:177] copyRemoteCerts
	I1216 06:38:46.453323 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:38:46.453367 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.472384 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:46.568304 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:38:46.585458 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:38:46.602822 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:38:46.619947 1639474 provision.go:87] duration metric: took 499.122604ms to configureAuth
	I1216 06:38:46.619964 1639474 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:38:46.620160 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:46.620252 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.637350 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:46.637660 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:46.637671 1639474 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:38:46.957629 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:38:46.957641 1639474 machine.go:97] duration metric: took 1.297496853s to provisionDockerMachine
	I1216 06:38:46.957652 1639474 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:38:46.957670 1639474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:38:46.957741 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:38:46.957790 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.978202 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.080335 1639474 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:38:47.083578 1639474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:38:47.083597 1639474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:38:47.083607 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:38:47.083662 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:38:47.083735 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:38:47.083808 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:38:47.083855 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:38:47.091346 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:47.108874 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:38:47.126774 1639474 start.go:296] duration metric: took 169.103296ms for postStartSetup
	I1216 06:38:47.126870 1639474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:38:47.126918 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.145224 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.237421 1639474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:38:47.242526 1639474 fix.go:56] duration metric: took 1.602809118s for fixHost
	I1216 06:38:47.242542 1639474 start.go:83] releasing machines lock for "functional-364120", held for 1.602847814s
	I1216 06:38:47.242635 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:47.260121 1639474 ssh_runner.go:195] Run: cat /version.json
	I1216 06:38:47.260167 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.260174 1639474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:38:47.260224 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.277503 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.283903 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.464356 1639474 ssh_runner.go:195] Run: systemctl --version
	I1216 06:38:47.476410 1639474 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:38:47.514461 1639474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:38:47.518820 1639474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:38:47.518882 1639474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:38:47.526809 1639474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:38:47.526823 1639474 start.go:496] detecting cgroup driver to use...
	I1216 06:38:47.526855 1639474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:38:47.526909 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:38:47.542915 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:38:47.556456 1639474 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:38:47.556532 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:38:47.572387 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:38:47.585623 1639474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:38:47.693830 1639474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:38:47.836192 1639474 docker.go:234] disabling docker service ...
	I1216 06:38:47.836253 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:38:47.851681 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:38:47.865315 1639474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:38:47.985223 1639474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:38:48.104393 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:38:48.118661 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:38:48.136892 1639474 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:38:48.136961 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.147508 1639474 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:38:48.147579 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.156495 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.165780 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.174392 1639474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:38:48.182433 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.191004 1639474 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.198914 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.207365 1639474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:38:48.214548 1639474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:38:48.221727 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.346771 1639474 ssh_runner.go:195] Run: sudo systemctl restart crio
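The ssh_runner commands between 06:38:48.13 and 06:38:48.34 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to "cgroupfs" to match the detected host cgroup driver, re-add conmon_cgroup = "pod", ensure default_sysctls contains net.ipv4.ip_unprivileged_port_start=0, then daemon-reload and restart crio. A small illustrative Go program (not minikube's implementation, which drives these over SSH) that only assembles and prints the same edit sequence:

// crioconf.go - illustrative sketch; minikube runs these remotely via ssh_runner.
// This program just builds and prints the edit sequence shown in the log above.
package main

import "fmt"

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	pauseImage := "registry.k8s.io/pause:3.10.1" // from crio.go:59 in the log
	cgroupMgr := "cgroupfs"                      // detected host cgroup driver

	cmds := []string{
		// Point cri-o at the pause image the kubelet expects.
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		// Match cri-o's cgroup manager to the kubelet's cgroup driver.
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		// Allow pods to bind low ports without extra capabilities.
		fmt.Sprintf(`sudo grep -q "^ *default_sysctls" %[1]s || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' %[1]s`, conf),
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		fmt.Println(c)
	}
}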
	I1216 06:38:48.562751 1639474 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:38:48.562822 1639474 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:38:48.566564 1639474 start.go:564] Will wait 60s for crictl version
	I1216 06:38:48.566626 1639474 ssh_runner.go:195] Run: which crictl
	I1216 06:38:48.570268 1639474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:38:48.600286 1639474 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:38:48.600360 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.630102 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.662511 1639474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:38:48.665401 1639474 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:38:48.681394 1639474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:38:48.688428 1639474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 06:38:48.691264 1639474 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:38:48.691424 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:48.691501 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.730823 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.730835 1639474 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:38:48.730892 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.756054 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.756075 1639474 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:38:48.756081 1639474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:38:48.756185 1639474 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:38:48.756284 1639474 ssh_runner.go:195] Run: crio config
	I1216 06:38:48.821920 1639474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 06:38:48.821940 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:48.821953 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:48.821961 1639474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:38:48.821989 1639474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:38:48.822118 1639474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
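The kubeadm.yaml above is rendered from the cluster config: node name functional-364120, advertise address 192.168.49.2, bind port 8441, the NamespaceAutoProvision admission-plugin override, and the crio socket, and it is later copied to /var/tmp/minikube/kubeadm.yaml.new (2071 bytes). As a rough, hypothetical sketch of how such a document can be templated in Go with text/template (minikube's real templates carry many more fields and live in its bootstrapper code):

// kubeadmcfg.go - hypothetical sketch of rendering a kubeadm InitConfiguration;
// field values below are taken from the report, the template itself is ours.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

type params struct {
	NodeName      string
	NodeIP        string
	APIServerPort int
	CRISocket     string
}

func main() {
	p := params{
		NodeName:      "functional-364120",
		NodeIP:        "192.168.49.2",
		APIServerPort: 8441,
		CRISocket:     "/var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}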
	
	I1216 06:38:48.822186 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:38:48.830098 1639474 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:38:48.830166 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:38:48.837393 1639474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:38:48.849769 1639474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:38:48.862224 1639474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1216 06:38:48.875020 1639474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:38:48.878641 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.988462 1639474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:38:49.398022 1639474 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:38:49.398033 1639474 certs.go:195] generating shared ca certs ...
	I1216 06:38:49.398047 1639474 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:38:49.398216 1639474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:38:49.398259 1639474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:38:49.398266 1639474 certs.go:257] generating profile certs ...
	I1216 06:38:49.398355 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:38:49.398397 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:38:49.398442 1639474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:38:49.398557 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:38:49.398591 1639474 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:38:49.398598 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:38:49.398627 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:38:49.398648 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:38:49.398673 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:38:49.398722 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:49.399378 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:38:49.420435 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:38:49.440537 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:38:49.460786 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:38:49.480628 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:38:49.497487 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:38:49.514939 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:38:49.532313 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:38:49.550215 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:38:49.580225 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:38:49.597583 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:38:49.615627 1639474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:38:49.629067 1639474 ssh_runner.go:195] Run: openssl version
	I1216 06:38:49.635264 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.642707 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:38:49.650527 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654313 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654369 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.695142 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:38:49.702542 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.709833 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:38:49.717202 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720835 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720891 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.762100 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:38:49.769702 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.777475 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:38:49.785134 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789017 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789075 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.830097 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
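The three `sudo test -L` checks above look for the OpenSSL subject-hash symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that make the copied CA certificates discoverable under /etc/ssl/certs. A rough Go sketch of that link step, reusing the paths from the log (helper names here are illustrative, not minikube's actual certs.go code):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash runs the same `openssl x509 -hash -noout -in <cert>` call the log
// shows and returns the short hash used to name the /etc/ssl/certs/<hash>.0 link.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// linkCACert creates the two symlinks seen in the log: one under the cert's own
// name and one under its subject-hash alias, both pointing at the copy in
// /usr/share/ca-certificates.
func linkCACert(certPath string) error {
	name := filepath.Base(certPath)
	if err := exec.Command("sudo", "ln", "-fs", certPath, "/etc/ssl/certs/"+name).Run(); err != nil {
		return err
	}
	hash, err := subjectHash(certPath)
	if err != nil {
		return err
	}
	return exec.Command("sudo", "ln", "-fs", certPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}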
	I1216 06:38:49.837887 1639474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:38:49.841718 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:38:49.883003 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:38:49.923792 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:38:49.964873 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:38:50.009367 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:38:50.051701 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
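The batch of `openssl x509 ... -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least the next 24 hours; openssl exits non-zero if the certificate would expire within that window. A small Go sketch of the same check, assuming the paths shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h reports whether the certificate will still be valid 86400 seconds
// from now, mirroring the `-checkend 86400` calls in the log.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Println(c, "valid for 24h:", validFor24h(c))
	}
}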
	I1216 06:38:50.093263 1639474 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:50.093349 1639474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:38:50.093423 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.120923 1639474 cri.go:89] found id: ""
	I1216 06:38:50.120988 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:38:50.128935 1639474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:38:50.128944 1639474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:38:50.129001 1639474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:38:50.136677 1639474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.137223 1639474 kubeconfig.go:125] found "functional-364120" server: "https://192.168.49.2:8441"
	I1216 06:38:50.138591 1639474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:38:50.148403 1639474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 06:24:13.753381452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 06:38:48.871691407 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
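The unified diff above is how the drift is reported: minikube compares the kubeadm.yaml it wrote previously with the freshly generated kubeadm.yaml.new, and a non-empty diff (here, the changed enable-admission-plugins value) triggers the reconfiguration steps that follow. A hedged Go sketch of such a comparison, relying on diff's exit code (1 means the files differ) and the paths from the log, could be:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted shells out to `sudo diff -u old new` like the log does.
// diff exits 0 when the files match, 1 when they differ, and >1 on error.
func configDrifted(oldPath, newPath string) (bool, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, nil // identical: no reconfiguration needed
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		fmt.Print(string(out)) // the drift, e.g. the admission-plugins change above
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	fmt.Println("drift detected:", drifted)
}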
	I1216 06:38:50.148423 1639474 kubeadm.go:1161] stopping kube-system containers ...
	I1216 06:38:50.148434 1639474 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 06:38:50.148512 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.182168 1639474 cri.go:89] found id: ""
	I1216 06:38:50.182231 1639474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 06:38:50.201521 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:38:50.209281 1639474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 06:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 06:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 06:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 06:28 /etc/kubernetes/scheduler.conf
	
	I1216 06:38:50.209338 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:38:50.217195 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:38:50.224648 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.224702 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:38:50.231990 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.239836 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.239894 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.247352 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:38:50.254862 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.254916 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
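Each of the three grep-then-rm pairs above applies the same rule: if a kubeconfig under /etc/kubernetes does not mention https://control-plane.minikube.internal:8441 (grep exits 1), the file is removed so the kubeadm kubeconfig phase below can regenerate it. A compact Go sketch of that rule, assuming the endpoint and paths shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8441"

// removeIfStale deletes the kubeconfig when it does not reference the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log.
func removeIfStale(path string) error {
	if exec.Command("sudo", "grep", endpoint, path).Run() == nil {
		return nil // endpoint present, keep the file
	}
	return exec.Command("sudo", "rm", "-f", path).Run()
}

func main() {
	for _, p := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(p); err != nil {
			fmt.Println("cleanup failed for", p, ":", err)
		}
	}
}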
	I1216 06:38:50.262178 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:38:50.270092 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:50.316982 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.327287 1639474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010279379s)
	I1216 06:38:51.327357 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.524152 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.584718 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.627519 1639474 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:38:51.627603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.127996 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.128739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.628621 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.128741 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.627831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.128517 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.628413 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.627801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.128288 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.628401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.128329 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.627998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.127831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.628547 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.628540 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.128146 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.128721 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.628766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.628489 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.627784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.128544 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.128535 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.627955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.127765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.627817 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.128692 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.628069 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.627921 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.128708 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.627689 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.127821 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.627890 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.127687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.628412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.128182 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.627796 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.128611 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.127795 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.628147 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.127806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.627762 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.127677 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.628043 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.127752 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.627697 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.128437 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.627779 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.128353 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.628739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.128542 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.628449 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.628679 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.128464 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.628609 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.127698 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.628073 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.128615 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.627743 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.127794 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.628605 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.627806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.628042 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.128637 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.627742 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.128694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.627803 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.127790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.628497 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.127786 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.627780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.627974 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.128440 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.628685 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.128622 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.628715 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.128328 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.628129 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.127678 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.628187 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.128724 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.627765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.127823 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.627834 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.128417 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.628784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.128501 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.628458 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.128381 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.128387 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.627769 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.128638 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.627687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.628346 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.128443 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.628500 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.128632 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.628608 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.128412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.628099 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.128601 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.127801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.628098 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:51.127749 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
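The long run of pgrep calls above is a fixed-interval wait loop: roughly every 500ms minikube checks whether a kube-apiserver process exists, and after about a minute without one it falls back to collecting diagnostics (the log-gathering cycles that follow). A simplified Go sketch of such a wait, assuming the same pgrep pattern and a one-minute budget (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `sudo pgrep -xnf kube-apiserver.*minikube.*` every
// 500ms until it succeeds or the deadline passes, much like the loop above.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true // apiserver process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	if !waitForAPIServer(time.Minute) {
		fmt.Println("apiserver process never appeared; gathering logs instead")
	}
}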
	I1216 06:39:51.627803 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:51.627880 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:51.662321 1639474 cri.go:89] found id: ""
	I1216 06:39:51.662334 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.662341 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:51.662347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:51.662418 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:51.693006 1639474 cri.go:89] found id: ""
	I1216 06:39:51.693020 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.693027 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:51.693032 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:51.693091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:51.719156 1639474 cri.go:89] found id: ""
	I1216 06:39:51.719169 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.719176 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:51.719181 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:51.719237 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:51.745402 1639474 cri.go:89] found id: ""
	I1216 06:39:51.745416 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.745423 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:51.745429 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:51.745492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:51.771770 1639474 cri.go:89] found id: ""
	I1216 06:39:51.771784 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.771791 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:51.771796 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:51.771854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:51.797172 1639474 cri.go:89] found id: ""
	I1216 06:39:51.797186 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.797192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:51.797198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:51.797257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:51.825478 1639474 cri.go:89] found id: ""
	I1216 06:39:51.825492 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.825499 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:51.825506 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:51.825516 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:51.897574 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:51.897593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:51.925635 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:51.925652 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:51.993455 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:51.993477 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:52.027866 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:52.027883 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:52.096535 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:54.597178 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:54.607445 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:54.607507 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:54.634705 1639474 cri.go:89] found id: ""
	I1216 06:39:54.634719 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.634733 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:54.634739 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:54.634800 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:54.668209 1639474 cri.go:89] found id: ""
	I1216 06:39:54.668223 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.668230 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:54.668235 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:54.668293 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:54.703300 1639474 cri.go:89] found id: ""
	I1216 06:39:54.703314 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.703321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:54.703326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:54.703385 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:54.732154 1639474 cri.go:89] found id: ""
	I1216 06:39:54.732168 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.732175 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:54.732180 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:54.732241 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:54.758222 1639474 cri.go:89] found id: ""
	I1216 06:39:54.758237 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.758244 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:54.758249 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:54.758309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:54.783433 1639474 cri.go:89] found id: ""
	I1216 06:39:54.783456 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.783463 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:54.783474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:54.783544 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:54.811264 1639474 cri.go:89] found id: ""
	I1216 06:39:54.811277 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.811284 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:54.811291 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:54.811302 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:54.876784 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:54.876805 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:54.891733 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:54.891749 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:54.963951 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:54.963962 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:54.963975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:55.036358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:55.036380 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:57.569339 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:57.579596 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:57.579659 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:57.604959 1639474 cri.go:89] found id: ""
	I1216 06:39:57.604973 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.604980 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:57.604985 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:57.605045 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:57.630710 1639474 cri.go:89] found id: ""
	I1216 06:39:57.630725 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.630731 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:57.630736 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:57.630794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:57.662734 1639474 cri.go:89] found id: ""
	I1216 06:39:57.662748 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.662756 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:57.662773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:57.662838 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:57.699847 1639474 cri.go:89] found id: ""
	I1216 06:39:57.699868 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.699875 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:57.699880 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:57.699941 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:57.726549 1639474 cri.go:89] found id: ""
	I1216 06:39:57.726563 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.726570 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:57.726575 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:57.726639 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:57.752583 1639474 cri.go:89] found id: ""
	I1216 06:39:57.752597 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.752604 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:57.752609 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:57.752667 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:57.780752 1639474 cri.go:89] found id: ""
	I1216 06:39:57.780767 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.780774 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:57.780782 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:57.780793 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:57.846931 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:57.846952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:57.862606 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:57.862623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:57.928743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:57.928764 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:57.928775 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:57.997232 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:57.997254 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:00.537687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:00.558059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:00.558144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:00.594907 1639474 cri.go:89] found id: ""
	I1216 06:40:00.594929 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.594939 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:00.594953 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:00.595036 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:00.628243 1639474 cri.go:89] found id: ""
	I1216 06:40:00.628272 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.628280 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:00.628294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:00.628377 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:00.667757 1639474 cri.go:89] found id: ""
	I1216 06:40:00.667773 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.667791 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:00.667797 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:00.667873 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:00.707304 1639474 cri.go:89] found id: ""
	I1216 06:40:00.707319 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.707327 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:00.707333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:00.707413 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:00.742620 1639474 cri.go:89] found id: ""
	I1216 06:40:00.742636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.742644 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:00.742650 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:00.742727 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:00.772404 1639474 cri.go:89] found id: ""
	I1216 06:40:00.772421 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.772429 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:00.772435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:00.772526 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:00.800238 1639474 cri.go:89] found id: ""
	I1216 06:40:00.800253 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.800260 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:00.800268 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:00.800280 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:00.866967 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:00.866989 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:00.883111 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:00.883127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:00.951359 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:00.951371 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:00.951382 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:01.020844 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:01.020870 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:03.552704 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:03.563452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:03.563545 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:03.588572 1639474 cri.go:89] found id: ""
	I1216 06:40:03.588585 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.588592 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:03.588598 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:03.588665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:03.617698 1639474 cri.go:89] found id: ""
	I1216 06:40:03.617712 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.617719 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:03.617724 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:03.617784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:03.643270 1639474 cri.go:89] found id: ""
	I1216 06:40:03.643285 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.643291 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:03.643296 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:03.643356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:03.679135 1639474 cri.go:89] found id: ""
	I1216 06:40:03.679148 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.679155 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:03.679160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:03.679217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:03.707978 1639474 cri.go:89] found id: ""
	I1216 06:40:03.707991 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.707998 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:03.708003 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:03.708071 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:03.741796 1639474 cri.go:89] found id: ""
	I1216 06:40:03.741821 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.741827 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:03.741832 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:03.741899 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:03.767959 1639474 cri.go:89] found id: ""
	I1216 06:40:03.767983 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.767991 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:03.767998 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:03.768009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:03.833601 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:03.833622 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:03.848136 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:03.848154 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:03.911646 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:03.911661 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:03.911672 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:03.980874 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:03.980894 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.512671 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:06.522859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:06.522944 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:06.552384 1639474 cri.go:89] found id: ""
	I1216 06:40:06.552399 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.552406 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:06.552411 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:06.552492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:06.577262 1639474 cri.go:89] found id: ""
	I1216 06:40:06.577276 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.577293 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:06.577299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:06.577357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:06.603757 1639474 cri.go:89] found id: ""
	I1216 06:40:06.603772 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.603779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:06.603784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:06.603850 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:06.629717 1639474 cri.go:89] found id: ""
	I1216 06:40:06.629732 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.629751 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:06.629756 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:06.629846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:06.665006 1639474 cri.go:89] found id: ""
	I1216 06:40:06.665031 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.665039 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:06.665044 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:06.665109 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:06.698777 1639474 cri.go:89] found id: ""
	I1216 06:40:06.698791 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.698807 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:06.698813 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:06.698879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:06.727424 1639474 cri.go:89] found id: ""
	I1216 06:40:06.727448 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.727455 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:06.727464 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:06.727475 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.758535 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:06.758552 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:06.827915 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:06.827944 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:06.843925 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:06.843949 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:06.913118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:06.913128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:06.913140 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.481120 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:09.491592 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:09.491658 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:09.518336 1639474 cri.go:89] found id: ""
	I1216 06:40:09.518351 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.518358 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:09.518363 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:09.518423 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:09.547930 1639474 cri.go:89] found id: ""
	I1216 06:40:09.547943 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.547950 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:09.547955 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:09.548012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:09.574921 1639474 cri.go:89] found id: ""
	I1216 06:40:09.574935 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.574942 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:09.574947 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:09.575008 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:09.600427 1639474 cri.go:89] found id: ""
	I1216 06:40:09.600495 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.600502 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:09.600508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:09.600567 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:09.628992 1639474 cri.go:89] found id: ""
	I1216 06:40:09.629006 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.629015 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:09.629019 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:09.629080 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:09.667383 1639474 cri.go:89] found id: ""
	I1216 06:40:09.667397 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.667404 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:09.667409 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:09.667468 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:09.710169 1639474 cri.go:89] found id: ""
	I1216 06:40:09.710183 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.710190 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:09.710197 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:09.710208 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:09.776054 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:09.776075 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:09.790720 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:09.790736 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:09.855182 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:09.855192 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:09.855204 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.922382 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:09.922402 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.451670 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:12.461890 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:12.461962 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:12.486630 1639474 cri.go:89] found id: ""
	I1216 06:40:12.486644 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.486650 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:12.486657 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:12.486719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:12.514531 1639474 cri.go:89] found id: ""
	I1216 06:40:12.514545 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.514551 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:12.514558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:12.514621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:12.541612 1639474 cri.go:89] found id: ""
	I1216 06:40:12.541627 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.541633 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:12.541638 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:12.541703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:12.567638 1639474 cri.go:89] found id: ""
	I1216 06:40:12.567652 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.567659 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:12.567664 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:12.567723 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:12.593074 1639474 cri.go:89] found id: ""
	I1216 06:40:12.593089 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.593096 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:12.593101 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:12.593164 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:12.621872 1639474 cri.go:89] found id: ""
	I1216 06:40:12.621886 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.621893 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:12.621898 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:12.621954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:12.658898 1639474 cri.go:89] found id: ""
	I1216 06:40:12.658912 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.658919 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:12.658927 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:12.658939 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:12.736529 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:12.736540 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:12.736551 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:12.804860 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:12.804881 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.834018 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:12.834036 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:12.903542 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:12.903564 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:15.418582 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:15.428941 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:15.429002 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:15.458081 1639474 cri.go:89] found id: ""
	I1216 06:40:15.458096 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.458103 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:15.458109 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:15.458172 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:15.487644 1639474 cri.go:89] found id: ""
	I1216 06:40:15.487658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.487665 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:15.487670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:15.487729 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:15.512552 1639474 cri.go:89] found id: ""
	I1216 06:40:15.512565 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.512572 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:15.512577 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:15.512646 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:15.537944 1639474 cri.go:89] found id: ""
	I1216 06:40:15.537958 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.537965 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:15.537971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:15.538030 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:15.574197 1639474 cri.go:89] found id: ""
	I1216 06:40:15.574211 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.574218 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:15.574223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:15.574289 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:15.603183 1639474 cri.go:89] found id: ""
	I1216 06:40:15.603197 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.603204 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:15.603209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:15.603272 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:15.628682 1639474 cri.go:89] found id: ""
	I1216 06:40:15.628696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.628703 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:15.628710 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:15.628720 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:15.716665 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:15.716676 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:15.716687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:15.787785 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:15.787806 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:15.815751 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:15.815772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:15.885879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:15.885902 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.402627 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:18.413143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:18.413213 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:18.439934 1639474 cri.go:89] found id: ""
	I1216 06:40:18.439948 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.439956 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:18.439961 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:18.440023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:18.467477 1639474 cri.go:89] found id: ""
	I1216 06:40:18.467491 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.467498 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:18.467503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:18.467564 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:18.492982 1639474 cri.go:89] found id: ""
	I1216 06:40:18.493002 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.493009 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:18.493013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:18.493073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:18.519158 1639474 cri.go:89] found id: ""
	I1216 06:40:18.519173 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.519180 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:18.519185 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:18.519250 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:18.544672 1639474 cri.go:89] found id: ""
	I1216 06:40:18.544687 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.544694 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:18.544699 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:18.544760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:18.574100 1639474 cri.go:89] found id: ""
	I1216 06:40:18.574115 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.574122 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:18.574127 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:18.574190 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:18.600048 1639474 cri.go:89] found id: ""
	I1216 06:40:18.600062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.600069 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:18.600077 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:18.600087 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:18.670680 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:18.670700 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.686391 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:18.686408 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:18.756196 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:18.756206 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:18.756218 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:18.824602 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:18.824623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.356152 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:21.366658 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:21.366719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:21.391945 1639474 cri.go:89] found id: ""
	I1216 06:40:21.391959 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.391966 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:21.391971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:21.392032 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:21.419561 1639474 cri.go:89] found id: ""
	I1216 06:40:21.419581 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.419588 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:21.419593 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:21.419662 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:21.446105 1639474 cri.go:89] found id: ""
	I1216 06:40:21.446119 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.446135 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:21.446143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:21.446212 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:21.472095 1639474 cri.go:89] found id: ""
	I1216 06:40:21.472110 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.472117 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:21.472123 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:21.472188 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:21.502751 1639474 cri.go:89] found id: ""
	I1216 06:40:21.502766 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.502773 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:21.502778 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:21.502841 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:21.528514 1639474 cri.go:89] found id: ""
	I1216 06:40:21.528538 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.528546 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:21.528551 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:21.528623 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:21.554279 1639474 cri.go:89] found id: ""
	I1216 06:40:21.554293 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.554300 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:21.554308 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:21.554319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:21.622775 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:21.622786 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:21.622795 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:21.692973 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:21.692993 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.722066 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:21.722083 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:21.789953 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:21.789974 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:24.305740 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:24.315908 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:24.315976 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:24.344080 1639474 cri.go:89] found id: ""
	I1216 06:40:24.344095 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.344102 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:24.344108 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:24.344169 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:24.370623 1639474 cri.go:89] found id: ""
	I1216 06:40:24.370638 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.370645 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:24.370649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:24.370714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:24.397678 1639474 cri.go:89] found id: ""
	I1216 06:40:24.397701 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.397709 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:24.397714 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:24.397787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:24.427585 1639474 cri.go:89] found id: ""
	I1216 06:40:24.427599 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.427607 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:24.427612 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:24.427685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:24.457451 1639474 cri.go:89] found id: ""
	I1216 06:40:24.457465 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.457472 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:24.457489 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:24.457562 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:24.483717 1639474 cri.go:89] found id: ""
	I1216 06:40:24.483731 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.483738 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:24.483743 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:24.483817 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:24.509734 1639474 cri.go:89] found id: ""
	I1216 06:40:24.509748 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.509756 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:24.509763 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:24.509774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:24.575490 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:24.575510 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:24.590459 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:24.590476 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:24.660840 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:24.660854 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:24.660865 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:24.742683 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:24.742706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:27.272978 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:27.283654 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:27.283721 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:27.310045 1639474 cri.go:89] found id: ""
	I1216 06:40:27.310060 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.310067 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:27.310072 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:27.310132 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:27.339319 1639474 cri.go:89] found id: ""
	I1216 06:40:27.339334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.339342 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:27.339347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:27.339409 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:27.366885 1639474 cri.go:89] found id: ""
	I1216 06:40:27.366901 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.366910 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:27.366915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:27.366980 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:27.392968 1639474 cri.go:89] found id: ""
	I1216 06:40:27.392982 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.392989 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:27.392994 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:27.393072 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:27.425432 1639474 cri.go:89] found id: ""
	I1216 06:40:27.425446 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.425466 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:27.425471 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:27.425538 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:27.454875 1639474 cri.go:89] found id: ""
	I1216 06:40:27.454899 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.454906 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:27.454912 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:27.454982 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:27.480348 1639474 cri.go:89] found id: ""
	I1216 06:40:27.480363 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.480370 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:27.480378 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:27.480389 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:27.550687 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:27.550715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:27.566692 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:27.566711 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:27.634204 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:27.634214 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:27.634227 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:27.706020 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:27.706040 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
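	(Editorial note on the cycle above: this is minikube's log-gathering loop. It probes for each control-plane container with crictl, finds none, and then collects kubelet, dmesg, CRI-O and container-status output before retrying. As a hedged sketch only, assuming shell access to the node, for example via `minikube ssh` with this test's profile, and reusing the exact paths and port that already appear in this log, the same checks could be reproduced by hand:)
	# does any kube-apiserver container exist? (same crictl filter the loop runs)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# is anything listening on the port the failing kubectl calls dial (8441)?
	sudo ss -ltnp | grep 8441
	# the describe call the loop keeps retrying, with the in-VM kubeconfig
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig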
	I1216 06:40:30.238169 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:30.248488 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:30.248550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:30.274527 1639474 cri.go:89] found id: ""
	I1216 06:40:30.274542 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.274549 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:30.274554 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:30.274615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:30.300592 1639474 cri.go:89] found id: ""
	I1216 06:40:30.300610 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.300617 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:30.300624 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:30.300693 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:30.327939 1639474 cri.go:89] found id: ""
	I1216 06:40:30.327966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.327973 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:30.327978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:30.328040 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:30.358884 1639474 cri.go:89] found id: ""
	I1216 06:40:30.358898 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.358905 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:30.358910 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:30.358968 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:30.387991 1639474 cri.go:89] found id: ""
	I1216 06:40:30.388005 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.388012 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:30.388017 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:30.388090 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:30.413034 1639474 cri.go:89] found id: ""
	I1216 06:40:30.413048 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.413055 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:30.413059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:30.413118 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:30.449975 1639474 cri.go:89] found id: ""
	I1216 06:40:30.450018 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.450034 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:30.450041 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:30.450053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:30.466503 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:30.466521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:30.528819 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:30.528828 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:30.528839 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:30.597696 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:30.597715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:30.625300 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:30.625317 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.194250 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:33.204305 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:33.204368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:33.229739 1639474 cri.go:89] found id: ""
	I1216 06:40:33.229753 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.229760 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:33.229765 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:33.229821 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:33.254131 1639474 cri.go:89] found id: ""
	I1216 06:40:33.254144 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.254151 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:33.254156 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:33.254214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:33.279859 1639474 cri.go:89] found id: ""
	I1216 06:40:33.279881 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.279889 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:33.279894 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:33.279956 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:33.305951 1639474 cri.go:89] found id: ""
	I1216 06:40:33.305966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.305973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:33.305978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:33.306037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:33.335767 1639474 cri.go:89] found id: ""
	I1216 06:40:33.335781 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.335789 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:33.335793 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:33.335859 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:33.362761 1639474 cri.go:89] found id: ""
	I1216 06:40:33.362774 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.362781 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:33.362786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:33.362843 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:33.389319 1639474 cri.go:89] found id: ""
	I1216 06:40:33.389334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.389340 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:33.389348 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:33.389359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:33.453913 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:33.453925 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:33.453936 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:33.522875 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:33.522895 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:33.556966 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:33.556981 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.624329 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:33.624350 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:36.139596 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:36.150559 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:36.150621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:36.176931 1639474 cri.go:89] found id: ""
	I1216 06:40:36.176946 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.176954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:36.176959 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:36.177023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:36.203410 1639474 cri.go:89] found id: ""
	I1216 06:40:36.203424 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.203430 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:36.203435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:36.203498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:36.232378 1639474 cri.go:89] found id: ""
	I1216 06:40:36.232393 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.232399 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:36.232407 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:36.232504 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:36.258614 1639474 cri.go:89] found id: ""
	I1216 06:40:36.258636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.258644 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:36.258649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:36.258711 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:36.287134 1639474 cri.go:89] found id: ""
	I1216 06:40:36.287149 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.287156 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:36.287161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:36.287225 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:36.316901 1639474 cri.go:89] found id: ""
	I1216 06:40:36.316915 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.316922 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:36.316927 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:36.316991 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:36.343964 1639474 cri.go:89] found id: ""
	I1216 06:40:36.343979 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.343988 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:36.343997 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:36.344009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:36.409151 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:36.409161 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:36.409172 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:36.477694 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:36.477717 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:36.507334 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:36.507355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:36.577747 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:36.577766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.094282 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:39.105025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:39.105089 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:39.131493 1639474 cri.go:89] found id: ""
	I1216 06:40:39.131507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.131514 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:39.131525 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:39.131586 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:39.163796 1639474 cri.go:89] found id: ""
	I1216 06:40:39.163811 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.163819 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:39.163823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:39.163886 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:39.191137 1639474 cri.go:89] found id: ""
	I1216 06:40:39.191152 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.191160 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:39.191165 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:39.191226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:39.217834 1639474 cri.go:89] found id: ""
	I1216 06:40:39.217850 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.217857 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:39.217862 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:39.217926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:39.244937 1639474 cri.go:89] found id: ""
	I1216 06:40:39.244951 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.244958 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:39.244963 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:39.245026 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:39.274684 1639474 cri.go:89] found id: ""
	I1216 06:40:39.274698 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.274706 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:39.274711 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:39.274774 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:39.302124 1639474 cri.go:89] found id: ""
	I1216 06:40:39.302138 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.302145 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:39.302153 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:39.302163 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:39.370146 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:39.370166 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:39.397930 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:39.397946 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:39.469905 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:39.469925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.487153 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:39.487169 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:39.556831 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.057113 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:42.068649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:42.068719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:42.098202 1639474 cri.go:89] found id: ""
	I1216 06:40:42.098217 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.098224 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:42.098229 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:42.098294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:42.130680 1639474 cri.go:89] found id: ""
	I1216 06:40:42.130696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.130703 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:42.130708 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:42.130779 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:42.167131 1639474 cri.go:89] found id: ""
	I1216 06:40:42.167146 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.167153 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:42.167160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:42.167230 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:42.197324 1639474 cri.go:89] found id: ""
	I1216 06:40:42.197339 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.197346 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:42.197352 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:42.197420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:42.225831 1639474 cri.go:89] found id: ""
	I1216 06:40:42.225848 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.225856 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:42.225861 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:42.225930 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:42.257762 1639474 cri.go:89] found id: ""
	I1216 06:40:42.257777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.257786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:42.257792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:42.257852 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:42.284492 1639474 cri.go:89] found id: ""
	I1216 06:40:42.284507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.284515 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:42.284523 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:42.284535 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:42.351298 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:42.351319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:42.367176 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:42.367193 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:42.433375 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.433386 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:42.433396 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:42.500708 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:42.500729 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.031368 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:45.055503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:45.055570 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:45.098074 1639474 cri.go:89] found id: ""
	I1216 06:40:45.098091 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.098100 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:45.098105 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:45.098174 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:45.144642 1639474 cri.go:89] found id: ""
	I1216 06:40:45.144658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.144666 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:45.144671 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:45.144743 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:45.177748 1639474 cri.go:89] found id: ""
	I1216 06:40:45.177777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.177786 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:45.177792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:45.177875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:45.237332 1639474 cri.go:89] found id: ""
	I1216 06:40:45.237350 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.237368 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:45.237373 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:45.237462 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:45.277580 1639474 cri.go:89] found id: ""
	I1216 06:40:45.277608 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.277625 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:45.277631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:45.277787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:45.319169 1639474 cri.go:89] found id: ""
	I1216 06:40:45.319184 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.319192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:45.319198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:45.319268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:45.355649 1639474 cri.go:89] found id: ""
	I1216 06:40:45.355663 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.355672 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:45.355691 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:45.355723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:45.423762 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:45.423783 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.451985 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:45.452002 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:45.516593 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:45.516613 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:45.531478 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:45.531500 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:45.596800 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
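	(Editorial note: every retry in this stretch fails identically; client-go's discovery step, the "couldn't get current server API group list" errors above, cannot even open a TCP connection to [::1]:8441, so the failure sits at the listener rather than inside Kubernetes. A minimal check of that endpoint, run on the node, is sketched below; the /readyz path is the standard apiserver readiness endpoint and is an assumption here, not something taken from this log:)
	# expect "connection refused" while no apiserver container is running;
	# once the apiserver comes up this should print "ok"
	curl -sk https://localhost:8441/readyz; echo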
	I1216 06:40:48.098483 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:48.108786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:48.108849 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:48.134211 1639474 cri.go:89] found id: ""
	I1216 06:40:48.134225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.134232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:48.134237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:48.134297 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:48.160517 1639474 cri.go:89] found id: ""
	I1216 06:40:48.160531 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.160538 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:48.160544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:48.160604 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:48.185669 1639474 cri.go:89] found id: ""
	I1216 06:40:48.185682 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.185690 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:48.185694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:48.185754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:48.210265 1639474 cri.go:89] found id: ""
	I1216 06:40:48.210279 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.210286 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:48.210291 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:48.210403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:48.234252 1639474 cri.go:89] found id: ""
	I1216 06:40:48.234267 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.234274 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:48.234279 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:48.234339 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:48.259358 1639474 cri.go:89] found id: ""
	I1216 06:40:48.259372 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.259379 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:48.259384 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:48.259443 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:48.288697 1639474 cri.go:89] found id: ""
	I1216 06:40:48.288713 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.288720 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:48.288728 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:48.288738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:48.357686 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:48.357712 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:48.372954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:48.372973 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:48.434679 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:48.434689 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:48.434701 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:48.505103 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:48.505127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:51.033411 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:51.043540 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:51.043600 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:51.070010 1639474 cri.go:89] found id: ""
	I1216 06:40:51.070025 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.070032 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:51.070037 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:51.070100 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:51.096267 1639474 cri.go:89] found id: ""
	I1216 06:40:51.096282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.096290 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:51.096295 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:51.096356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:51.122692 1639474 cri.go:89] found id: ""
	I1216 06:40:51.122707 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.122714 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:51.122719 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:51.122784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:51.152647 1639474 cri.go:89] found id: ""
	I1216 06:40:51.152662 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.152670 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:51.152680 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:51.152744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:51.180574 1639474 cri.go:89] found id: ""
	I1216 06:40:51.180589 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.180597 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:51.180602 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:51.180668 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:51.206605 1639474 cri.go:89] found id: ""
	I1216 06:40:51.206619 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.206626 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:51.206631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:51.206695 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:51.231786 1639474 cri.go:89] found id: ""
	I1216 06:40:51.231809 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.231817 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:51.231825 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:51.231835 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:51.297100 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:51.297120 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:51.311954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:51.311972 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:51.379683 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:51.379694 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:51.379706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:51.447537 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:51.447557 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:53.983520 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:53.993929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:53.993987 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:54.023619 1639474 cri.go:89] found id: ""
	I1216 06:40:54.023634 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.023640 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:54.023645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:54.023708 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:54.049842 1639474 cri.go:89] found id: ""
	I1216 06:40:54.049857 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.049864 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:54.049869 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:54.049934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:54.077181 1639474 cri.go:89] found id: ""
	I1216 06:40:54.077205 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.077212 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:54.077217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:54.077280 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:54.105267 1639474 cri.go:89] found id: ""
	I1216 06:40:54.105282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.105291 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:54.105297 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:54.105363 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:54.130851 1639474 cri.go:89] found id: ""
	I1216 06:40:54.130874 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.130881 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:54.130886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:54.130949 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:54.156895 1639474 cri.go:89] found id: ""
	I1216 06:40:54.156910 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.156917 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:54.156923 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:54.156983 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:54.183545 1639474 cri.go:89] found id: ""
	I1216 06:40:54.183560 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.183566 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:54.183574 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:54.183584 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:54.249489 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:54.249509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:54.263930 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:54.263947 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:54.329743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:54.329755 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:54.329766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:54.396582 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:54.396603 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:56.928591 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:56.939856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:56.939917 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:56.967210 1639474 cri.go:89] found id: ""
	I1216 06:40:56.967225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.967232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:56.967237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:56.967298 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:56.993815 1639474 cri.go:89] found id: ""
	I1216 06:40:56.993829 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.993836 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:56.993841 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:56.993898 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:57.029670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.029684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.029691 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:57.029696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:57.029754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:57.054833 1639474 cri.go:89] found id: ""
	I1216 06:40:57.054847 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.054854 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:57.054859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:57.054924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:57.079670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.079684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.079691 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:57.079696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:57.079761 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:57.104048 1639474 cri.go:89] found id: ""
	I1216 06:40:57.104062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.104069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:57.104074 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:57.104142 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:57.129442 1639474 cri.go:89] found id: ""
	I1216 06:40:57.129462 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.129469 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:57.129477 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:57.129487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:57.197165 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:57.197185 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:57.226479 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:57.226498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:57.292031 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:57.292053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:57.306889 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:57.306905 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:57.372214 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:59.872521 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:59.882455 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:59.882521 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:59.913998 1639474 cri.go:89] found id: ""
	I1216 06:40:59.914012 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.914020 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:59.914025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:59.914091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:59.942569 1639474 cri.go:89] found id: ""
	I1216 06:40:59.942583 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.942589 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:59.942594 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:59.942665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:59.970700 1639474 cri.go:89] found id: ""
	I1216 06:40:59.970729 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.970736 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:59.970742 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:59.970809 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:59.997067 1639474 cri.go:89] found id: ""
	I1216 06:40:59.997085 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.997092 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:59.997098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:59.997163 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:00.191988 1639474 cri.go:89] found id: ""
	I1216 06:41:00.192005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.192013 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:00.192018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:00.192086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:00.277353 1639474 cri.go:89] found id: ""
	I1216 06:41:00.277369 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.277377 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:00.277382 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:00.277497 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:00.317655 1639474 cri.go:89] found id: ""
	I1216 06:41:00.317680 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.317688 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:00.317697 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:00.317710 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:00.373222 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:00.373244 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:00.450289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:00.450312 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:00.467305 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:00.467321 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:00.537520 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:00.537529 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:00.537544 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.105837 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:03.116211 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:03.116271 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:03.140992 1639474 cri.go:89] found id: ""
	I1216 06:41:03.141005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.141013 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:03.141018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:03.141077 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:03.169832 1639474 cri.go:89] found id: ""
	I1216 06:41:03.169846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.169853 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:03.169858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:03.169923 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:03.200294 1639474 cri.go:89] found id: ""
	I1216 06:41:03.200308 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.200316 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:03.200321 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:03.200422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:03.226615 1639474 cri.go:89] found id: ""
	I1216 06:41:03.226629 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.226635 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:03.226641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:03.226702 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:03.252099 1639474 cri.go:89] found id: ""
	I1216 06:41:03.252113 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.252120 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:03.252125 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:03.252186 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:03.277049 1639474 cri.go:89] found id: ""
	I1216 06:41:03.277064 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.277070 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:03.277075 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:03.277136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:03.302834 1639474 cri.go:89] found id: ""
	I1216 06:41:03.302850 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.302857 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:03.302865 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:03.302877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:03.369696 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:03.369719 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:03.384336 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:03.384358 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:03.450962 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:03.450973 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:03.450985 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.522274 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:03.522297 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:06.053196 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:06.063351 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:06.063422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:06.089075 1639474 cri.go:89] found id: ""
	I1216 06:41:06.089089 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.089096 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:06.089102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:06.089162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:06.118245 1639474 cri.go:89] found id: ""
	I1216 06:41:06.118259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.118266 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:06.118271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:06.118336 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:06.143697 1639474 cri.go:89] found id: ""
	I1216 06:41:06.143724 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.143732 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:06.143737 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:06.143805 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:06.169572 1639474 cri.go:89] found id: ""
	I1216 06:41:06.169586 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.169594 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:06.169599 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:06.169661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:06.195851 1639474 cri.go:89] found id: ""
	I1216 06:41:06.195867 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.195874 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:06.195879 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:06.195942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:06.223692 1639474 cri.go:89] found id: ""
	I1216 06:41:06.223707 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.223715 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:06.223720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:06.223780 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:06.249649 1639474 cri.go:89] found id: ""
	I1216 06:41:06.249679 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.249686 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:06.249694 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:06.249705 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:06.314738 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:06.314759 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:06.329678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:06.329695 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:06.395023 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:06.395034 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:06.395046 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:06.463667 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:06.463687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:08.992603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:09.003856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:09.003937 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:09.031578 1639474 cri.go:89] found id: ""
	I1216 06:41:09.031592 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.031599 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:09.031604 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:09.031663 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:09.056946 1639474 cri.go:89] found id: ""
	I1216 06:41:09.056961 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.056969 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:09.056974 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:09.057035 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:09.082038 1639474 cri.go:89] found id: ""
	I1216 06:41:09.082053 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.082060 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:09.082065 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:09.082125 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:09.107847 1639474 cri.go:89] found id: ""
	I1216 06:41:09.107862 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.107869 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:09.107874 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:09.107933 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:09.133995 1639474 cri.go:89] found id: ""
	I1216 06:41:09.134010 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.134017 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:09.134022 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:09.134086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:09.159110 1639474 cri.go:89] found id: ""
	I1216 06:41:09.159125 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.159132 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:09.159137 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:09.159197 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:09.189150 1639474 cri.go:89] found id: ""
	I1216 06:41:09.189164 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.189171 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:09.189179 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:09.189190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:09.251080 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:09.251090 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:09.251102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:09.318859 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:09.318879 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:09.349358 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:09.349381 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:09.418362 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:09.418385 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:11.933431 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:11.944248 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:11.944309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:11.976909 1639474 cri.go:89] found id: ""
	I1216 06:41:11.976924 1639474 logs.go:282] 0 containers: []
	W1216 06:41:11.976932 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:11.976937 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:11.976998 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:12.011035 1639474 cri.go:89] found id: ""
	I1216 06:41:12.011050 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.011057 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:12.011062 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:12.011126 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:12.041473 1639474 cri.go:89] found id: ""
	I1216 06:41:12.041495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.041502 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:12.041508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:12.041571 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:12.066438 1639474 cri.go:89] found id: ""
	I1216 06:41:12.066463 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.066471 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:12.066477 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:12.066542 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:12.090884 1639474 cri.go:89] found id: ""
	I1216 06:41:12.090899 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.090906 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:12.090911 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:12.090970 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:12.116491 1639474 cri.go:89] found id: ""
	I1216 06:41:12.116506 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.116516 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:12.116522 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:12.116580 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:12.142941 1639474 cri.go:89] found id: ""
	I1216 06:41:12.142956 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.142963 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:12.142971 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:12.142982 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:12.172125 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:12.172142 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:12.240713 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:12.240734 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:12.255672 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:12.255689 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:12.321167 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:12.321177 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:12.321190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:14.894286 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:14.904324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:14.904383 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:14.938397 1639474 cri.go:89] found id: ""
	I1216 06:41:14.938421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.938429 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:14.938434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:14.938501 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:14.967116 1639474 cri.go:89] found id: ""
	I1216 06:41:14.967130 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.967137 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:14.967141 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:14.967203 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:14.993300 1639474 cri.go:89] found id: ""
	I1216 06:41:14.993324 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.993331 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:14.993336 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:14.993414 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:15.065324 1639474 cri.go:89] found id: ""
	I1216 06:41:15.065347 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.065374 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:15.065379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:15.065453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:15.094230 1639474 cri.go:89] found id: ""
	I1216 06:41:15.094254 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.094262 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:15.094268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:15.094334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:15.125543 1639474 cri.go:89] found id: ""
	I1216 06:41:15.125557 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.125567 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:15.125574 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:15.125641 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:15.153256 1639474 cri.go:89] found id: ""
	I1216 06:41:15.153271 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.153280 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:15.153287 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:15.153298 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:15.220613 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:15.220633 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:15.235620 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:15.235637 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:15.298217 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:15.298227 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:15.298238 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:15.366620 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:15.366643 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:17.896595 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:17.908386 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:17.908446 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:17.937743 1639474 cri.go:89] found id: ""
	I1216 06:41:17.937757 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.937763 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:17.937768 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:17.937827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:17.970369 1639474 cri.go:89] found id: ""
	I1216 06:41:17.970383 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.970390 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:17.970395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:17.970453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:17.996832 1639474 cri.go:89] found id: ""
	I1216 06:41:17.996846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.996853 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:17.996858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:17.996924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:18.038145 1639474 cri.go:89] found id: ""
	I1216 06:41:18.038159 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.038167 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:18.038172 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:18.038235 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:18.064225 1639474 cri.go:89] found id: ""
	I1216 06:41:18.064239 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.064248 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:18.064254 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:18.064314 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:18.094775 1639474 cri.go:89] found id: ""
	I1216 06:41:18.094789 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.094797 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:18.094802 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:18.094863 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:18.120874 1639474 cri.go:89] found id: ""
	I1216 06:41:18.120888 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.120895 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:18.120903 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:18.120913 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:18.188407 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:18.188429 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:18.221279 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:18.221295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:18.288107 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:18.288129 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:18.303324 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:18.303342 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:18.371049 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:20.871320 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:20.881458 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:20.881519 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:20.910690 1639474 cri.go:89] found id: ""
	I1216 06:41:20.910704 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.910711 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:20.910716 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:20.910778 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:20.940115 1639474 cri.go:89] found id: ""
	I1216 06:41:20.940131 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.940138 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:20.940144 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:20.940205 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:20.971890 1639474 cri.go:89] found id: ""
	I1216 06:41:20.971904 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.971911 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:20.971916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:20.971973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:20.997611 1639474 cri.go:89] found id: ""
	I1216 06:41:20.997627 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.997634 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:20.997639 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:20.997714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:21.028905 1639474 cri.go:89] found id: ""
	I1216 06:41:21.028919 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.028926 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:21.028931 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:21.028990 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:21.055176 1639474 cri.go:89] found id: ""
	I1216 06:41:21.055190 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.055197 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:21.055202 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:21.055262 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:21.081697 1639474 cri.go:89] found id: ""
	I1216 06:41:21.081712 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.081719 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:21.081727 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:21.081738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:21.148234 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:21.148255 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:21.164172 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:21.164192 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:21.228352 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:21.228362 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:21.228374 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:21.295358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:21.295378 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:23.826021 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:23.836732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:23.836794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:23.865987 1639474 cri.go:89] found id: ""
	I1216 06:41:23.866001 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.866008 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:23.866013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:23.866073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:23.891393 1639474 cri.go:89] found id: ""
	I1216 06:41:23.891408 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.891415 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:23.891420 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:23.891486 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:23.918388 1639474 cri.go:89] found id: ""
	I1216 06:41:23.918403 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.918410 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:23.918415 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:23.918475 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:23.961374 1639474 cri.go:89] found id: ""
	I1216 06:41:23.961390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.961397 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:23.961402 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:23.961461 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:23.987162 1639474 cri.go:89] found id: ""
	I1216 06:41:23.987176 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.987184 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:23.987195 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:23.987257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:24.016111 1639474 cri.go:89] found id: ""
	I1216 06:41:24.016127 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.016134 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:24.016139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:24.016202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:24.043481 1639474 cri.go:89] found id: ""
	I1216 06:41:24.043495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.043503 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:24.043511 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:24.043521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:24.111316 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:24.111326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:24.111338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:24.178630 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:24.178650 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:24.213388 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:24.213405 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:24.283269 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:24.283290 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:26.798616 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:26.808720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:26.808786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:26.834419 1639474 cri.go:89] found id: ""
	I1216 06:41:26.834433 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.834451 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:26.834457 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:26.834530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:26.860230 1639474 cri.go:89] found id: ""
	I1216 06:41:26.860244 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.860251 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:26.860256 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:26.860316 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:26.886841 1639474 cri.go:89] found id: ""
	I1216 06:41:26.886856 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.886863 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:26.886868 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:26.886934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:26.933097 1639474 cri.go:89] found id: ""
	I1216 06:41:26.933121 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.933129 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:26.933134 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:26.933201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:26.967219 1639474 cri.go:89] found id: ""
	I1216 06:41:26.967233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.967241 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:26.967258 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:26.967319 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:27.008045 1639474 cri.go:89] found id: ""
	I1216 06:41:27.008074 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.008082 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:27.008088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:27.008156 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:27.034453 1639474 cri.go:89] found id: ""
	I1216 06:41:27.034469 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.034476 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:27.034484 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:27.034507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:27.104223 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:27.104245 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:27.119468 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:27.119487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:27.188973 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:27.188983 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:27.188994 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:27.258008 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:27.258028 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:29.786955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:29.797122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:29.797184 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:29.824207 1639474 cri.go:89] found id: ""
	I1216 06:41:29.824221 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.824228 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:29.824233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:29.824290 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:29.850615 1639474 cri.go:89] found id: ""
	I1216 06:41:29.850630 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.850636 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:29.850641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:29.850703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:29.876387 1639474 cri.go:89] found id: ""
	I1216 06:41:29.876401 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.876408 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:29.876413 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:29.876498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:29.907653 1639474 cri.go:89] found id: ""
	I1216 06:41:29.907667 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.907674 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:29.907678 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:29.907735 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:29.944219 1639474 cri.go:89] found id: ""
	I1216 06:41:29.944233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.944239 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:29.944244 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:29.944302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:29.976007 1639474 cri.go:89] found id: ""
	I1216 06:41:29.976021 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.976029 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:29.976033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:29.976095 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:30.024272 1639474 cri.go:89] found id: ""
	I1216 06:41:30.024289 1639474 logs.go:282] 0 containers: []
	W1216 06:41:30.024297 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:30.024306 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:30.024322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:30.119806 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:30.119827 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:30.136379 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:30.136400 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:30.205690 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:30.205700 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:30.205723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:30.274216 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:30.274240 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:32.809139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:32.819371 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:32.819431 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:32.847039 1639474 cri.go:89] found id: ""
	I1216 06:41:32.847054 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.847065 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:32.847070 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:32.847138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:32.875215 1639474 cri.go:89] found id: ""
	I1216 06:41:32.875229 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.875236 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:32.875240 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:32.875300 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:32.907300 1639474 cri.go:89] found id: ""
	I1216 06:41:32.907314 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.907321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:32.907326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:32.907381 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:32.938860 1639474 cri.go:89] found id: ""
	I1216 06:41:32.938874 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.938881 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:32.938886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:32.938942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:32.971352 1639474 cri.go:89] found id: ""
	I1216 06:41:32.971366 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.971374 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:32.971379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:32.971436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:33.012516 1639474 cri.go:89] found id: ""
	I1216 06:41:33.012531 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.012538 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:33.012543 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:33.012622 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:33.041830 1639474 cri.go:89] found id: ""
	I1216 06:41:33.041844 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.041851 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:33.041859 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:33.041869 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:33.107636 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:33.107656 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:33.122787 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:33.122803 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:33.191649 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:33.191659 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:33.191682 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:33.263447 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:33.263474 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:35.794998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:35.805176 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:35.805236 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:35.831135 1639474 cri.go:89] found id: ""
	I1216 06:41:35.831149 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.831156 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:35.831161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:35.831223 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:35.860254 1639474 cri.go:89] found id: ""
	I1216 06:41:35.860281 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.860289 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:35.860294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:35.860360 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:35.887306 1639474 cri.go:89] found id: ""
	I1216 06:41:35.887320 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.887327 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:35.887333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:35.887391 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:35.917653 1639474 cri.go:89] found id: ""
	I1216 06:41:35.917668 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.917690 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:35.917696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:35.917763 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:35.959523 1639474 cri.go:89] found id: ""
	I1216 06:41:35.959546 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.959553 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:35.959558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:35.959629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:35.989044 1639474 cri.go:89] found id: ""
	I1216 06:41:35.989062 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.989069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:35.989077 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:35.989138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:36.024859 1639474 cri.go:89] found id: ""
	I1216 06:41:36.024875 1639474 logs.go:282] 0 containers: []
	W1216 06:41:36.024885 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:36.024895 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:36.024912 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:36.056878 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:36.056896 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:36.121811 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:36.121834 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:36.137437 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:36.137455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:36.205908 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:36.205920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:36.205931 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:38.776930 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:38.786842 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:38.786902 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:38.812622 1639474 cri.go:89] found id: ""
	I1216 06:41:38.812637 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.812644 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:38.812649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:38.812705 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:38.838434 1639474 cri.go:89] found id: ""
	I1216 06:41:38.838448 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.838456 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:38.838461 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:38.838523 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:38.863392 1639474 cri.go:89] found id: ""
	I1216 06:41:38.863407 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.863414 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:38.863419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:38.863479 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:38.888908 1639474 cri.go:89] found id: ""
	I1216 06:41:38.888922 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.888929 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:38.888934 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:38.888993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:38.917217 1639474 cri.go:89] found id: ""
	I1216 06:41:38.917247 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.917255 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:38.917260 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:38.917340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:38.951610 1639474 cri.go:89] found id: ""
	I1216 06:41:38.951623 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.951630 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:38.951645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:38.951706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:38.982144 1639474 cri.go:89] found id: ""
	I1216 06:41:38.982158 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.982165 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:38.982173 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:38.982184 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:39.051829 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:39.051839 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:39.051860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:39.125701 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:39.125723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:39.157087 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:39.157104 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:39.225477 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:39.225498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:41.740919 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:41.751149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:41.751211 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:41.776245 1639474 cri.go:89] found id: ""
	I1216 06:41:41.776259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.776266 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:41.776271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:41.776330 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:41.801530 1639474 cri.go:89] found id: ""
	I1216 06:41:41.801543 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.801556 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:41.801561 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:41.801619 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:41.826287 1639474 cri.go:89] found id: ""
	I1216 06:41:41.826300 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.826307 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:41.826312 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:41.826368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:41.855404 1639474 cri.go:89] found id: ""
	I1216 06:41:41.855419 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.855426 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:41.855431 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:41.855490 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:41.883079 1639474 cri.go:89] found id: ""
	I1216 06:41:41.883093 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.883100 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:41.883104 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:41.883162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:41.924362 1639474 cri.go:89] found id: ""
	I1216 06:41:41.924376 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.924393 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:41.924399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:41.924503 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:41.958054 1639474 cri.go:89] found id: ""
	I1216 06:41:41.958069 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.958076 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:41.958083 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:41.958093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:42.031093 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:42.031104 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:42.031117 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:42.098938 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:42.098961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:42.132662 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:42.132681 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:42.206635 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:42.206658 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:44.725533 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:44.735690 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:44.735751 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:44.764539 1639474 cri.go:89] found id: ""
	I1216 06:41:44.764554 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.764561 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:44.764566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:44.764624 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:44.789462 1639474 cri.go:89] found id: ""
	I1216 06:41:44.789476 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.789483 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:44.789487 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:44.789550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:44.813863 1639474 cri.go:89] found id: ""
	I1216 06:41:44.813877 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.813884 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:44.813889 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:44.813948 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:44.842990 1639474 cri.go:89] found id: ""
	I1216 06:41:44.843006 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.843013 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:44.843018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:44.843076 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:44.868986 1639474 cri.go:89] found id: ""
	I1216 06:41:44.869000 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.869006 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:44.869013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:44.869070 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:44.897735 1639474 cri.go:89] found id: ""
	I1216 06:41:44.897759 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.897767 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:44.897773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:44.897840 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:44.927690 1639474 cri.go:89] found id: ""
	I1216 06:41:44.927715 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.927722 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:44.927730 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:44.927740 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:45.002166 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:45.002190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:45.029027 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:45.029047 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:45.167411 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:45.167428 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:45.167448 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:45.247049 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:45.247076 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:47.787199 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:47.797629 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:47.797694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:47.822803 1639474 cri.go:89] found id: ""
	I1216 06:41:47.822818 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.822825 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:47.822830 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:47.822894 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:47.848082 1639474 cri.go:89] found id: ""
	I1216 06:41:47.848109 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.848117 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:47.848122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:47.848199 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:47.874407 1639474 cri.go:89] found id: ""
	I1216 06:41:47.874421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.874428 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:47.874434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:47.874495 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:47.908568 1639474 cri.go:89] found id: ""
	I1216 06:41:47.908604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.908611 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:47.908617 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:47.908685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:47.942423 1639474 cri.go:89] found id: ""
	I1216 06:41:47.942438 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.942445 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:47.942450 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:47.942518 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:47.977188 1639474 cri.go:89] found id: ""
	I1216 06:41:47.977210 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.977218 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:47.977223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:47.977302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:48.011589 1639474 cri.go:89] found id: ""
	I1216 06:41:48.011604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:48.011623 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:48.011637 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:48.011649 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:48.090336 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:48.090357 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:48.106676 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:48.106693 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:48.174952 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:48.174963 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:48.174975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:48.244365 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:48.244386 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:50.777766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:50.790374 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:50.790436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:50.817848 1639474 cri.go:89] found id: ""
	I1216 06:41:50.817863 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.817870 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:50.817875 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:50.817947 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:50.848261 1639474 cri.go:89] found id: ""
	I1216 06:41:50.848277 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.848285 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:50.848290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:50.848357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:50.875745 1639474 cri.go:89] found id: ""
	I1216 06:41:50.875771 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.875779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:50.875784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:50.875857 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:50.908128 1639474 cri.go:89] found id: ""
	I1216 06:41:50.908142 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.908149 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:50.908154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:50.908216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:50.945866 1639474 cri.go:89] found id: ""
	I1216 06:41:50.945880 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.945897 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:50.945906 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:50.945988 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:50.976758 1639474 cri.go:89] found id: ""
	I1216 06:41:50.976772 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.976779 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:50.976790 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:50.976862 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:51.012047 1639474 cri.go:89] found id: ""
	I1216 06:41:51.012061 1639474 logs.go:282] 0 containers: []
	W1216 06:41:51.012080 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:51.012088 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:51.012099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:51.079840 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:51.079863 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:51.095967 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:51.095984 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:51.168911 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:51.168920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:51.168932 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:51.241258 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:51.241281 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:53.774859 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:53.785580 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:53.785647 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:53.815910 1639474 cri.go:89] found id: ""
	I1216 06:41:53.815946 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.815954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:53.815960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:53.816034 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:53.843197 1639474 cri.go:89] found id: ""
	I1216 06:41:53.843220 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.843228 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:53.843233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:53.843303 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:53.869584 1639474 cri.go:89] found id: ""
	I1216 06:41:53.869598 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.869605 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:53.869610 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:53.869672 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:53.898126 1639474 cri.go:89] found id: ""
	I1216 06:41:53.898141 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.898148 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:53.898154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:53.898217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:53.935008 1639474 cri.go:89] found id: ""
	I1216 06:41:53.935022 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.935029 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:53.935033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:53.935094 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:53.971715 1639474 cri.go:89] found id: ""
	I1216 06:41:53.971729 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.971740 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:53.971745 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:53.971827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:54.004089 1639474 cri.go:89] found id: ""
	I1216 06:41:54.004107 1639474 logs.go:282] 0 containers: []
	W1216 06:41:54.004115 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:54.004138 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:54.004151 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:54.072434 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:54.072455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:54.088417 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:54.088436 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:54.154720 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:54.154730 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:54.154741 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:54.223744 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:54.223763 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:56.753558 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:56.764118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:56.764182 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:56.789865 1639474 cri.go:89] found id: ""
	I1216 06:41:56.789879 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.789886 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:56.789891 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:56.789954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:56.815375 1639474 cri.go:89] found id: ""
	I1216 06:41:56.815390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.815396 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:56.815401 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:56.815458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:56.843367 1639474 cri.go:89] found id: ""
	I1216 06:41:56.843381 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.843389 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:56.843394 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:56.843453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:56.869235 1639474 cri.go:89] found id: ""
	I1216 06:41:56.869249 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.869263 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:56.869268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:56.869325 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:56.894296 1639474 cri.go:89] found id: ""
	I1216 06:41:56.894310 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.894318 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:56.894323 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:56.894393 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:56.930771 1639474 cri.go:89] found id: ""
	I1216 06:41:56.930786 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.930795 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:56.930800 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:56.930877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:56.961829 1639474 cri.go:89] found id: ""
	I1216 06:41:56.961855 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.961862 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:56.961869 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:56.961880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:56.982515 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:56.982532 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:57.053403 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:57.053413 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:57.053424 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:57.122315 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:57.122338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:57.151668 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:57.151684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:59.721370 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:59.731285 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:59.731355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:59.759821 1639474 cri.go:89] found id: ""
	I1216 06:41:59.759835 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.759843 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:59.759848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:59.759905 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:59.784708 1639474 cri.go:89] found id: ""
	I1216 06:41:59.784721 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.784728 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:59.784733 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:59.784791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:59.810181 1639474 cri.go:89] found id: ""
	I1216 06:41:59.810196 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.810204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:59.810209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:59.810268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:59.836051 1639474 cri.go:89] found id: ""
	I1216 06:41:59.836072 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.836082 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:59.836094 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:59.836177 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:59.860701 1639474 cri.go:89] found id: ""
	I1216 06:41:59.860714 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.860722 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:59.860727 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:59.860786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:59.885062 1639474 cri.go:89] found id: ""
	I1216 06:41:59.885076 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.885092 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:59.885098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:59.885154 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:59.926044 1639474 cri.go:89] found id: ""
	I1216 06:41:59.926058 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.926065 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:59.926073 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:59.926099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:00.037850 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:00.037864 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:00.037877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:00.264777 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:00.264802 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:00.361496 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:00.361518 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:00.460153 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:00.460175 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:02.976790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:02.987102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:02.987180 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:03.015111 1639474 cri.go:89] found id: ""
	I1216 06:42:03.015126 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.015133 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:03.015139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:03.015202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:03.040871 1639474 cri.go:89] found id: ""
	I1216 06:42:03.040903 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.040910 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:03.040915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:03.040977 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:03.065726 1639474 cri.go:89] found id: ""
	I1216 06:42:03.065740 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.065748 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:03.065754 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:03.065813 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:03.090951 1639474 cri.go:89] found id: ""
	I1216 06:42:03.090966 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.090973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:03.090979 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:03.091037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:03.119521 1639474 cri.go:89] found id: ""
	I1216 06:42:03.119536 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.119543 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:03.119549 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:03.119615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:03.147166 1639474 cri.go:89] found id: ""
	I1216 06:42:03.147181 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.147188 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:03.147193 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:03.147267 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:03.172021 1639474 cri.go:89] found id: ""
	I1216 06:42:03.172035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.172042 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:03.172050 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:03.172060 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:03.186822 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:03.186838 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:03.250765 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:03.250775 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:03.250786 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:03.325562 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:03.325590 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:03.355074 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:03.355093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:05.922524 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:05.932734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:05.932804 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:05.960790 1639474 cri.go:89] found id: ""
	I1216 06:42:05.960804 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.960811 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:05.960816 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:05.960884 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:05.986356 1639474 cri.go:89] found id: ""
	I1216 06:42:05.986386 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.986394 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:05.986399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:05.986458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:06.015030 1639474 cri.go:89] found id: ""
	I1216 06:42:06.015046 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.015053 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:06.015058 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:06.015119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:06.041009 1639474 cri.go:89] found id: ""
	I1216 06:42:06.041023 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.041030 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:06.041035 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:06.041091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:06.068292 1639474 cri.go:89] found id: ""
	I1216 06:42:06.068306 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.068314 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:06.068319 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:06.068375 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:06.100555 1639474 cri.go:89] found id: ""
	I1216 06:42:06.100569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.100576 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:06.100582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:06.100642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:06.132353 1639474 cri.go:89] found id: ""
	I1216 06:42:06.132367 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.132374 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:06.132382 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:06.132392 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:06.201249 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:06.201259 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:06.201271 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:06.271083 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:06.271102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:06.300840 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:06.300857 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:06.369023 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:06.369043 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:08.885532 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:08.897655 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:08.897714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:08.929123 1639474 cri.go:89] found id: ""
	I1216 06:42:08.929137 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.929144 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:08.929149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:08.929216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:08.969020 1639474 cri.go:89] found id: ""
	I1216 06:42:08.969036 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.969043 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:08.969049 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:08.969107 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:08.995554 1639474 cri.go:89] found id: ""
	I1216 06:42:08.995569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.995577 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:08.995582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:08.995642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:09.023705 1639474 cri.go:89] found id: ""
	I1216 06:42:09.023720 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.023727 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:09.023732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:09.023795 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:09.050243 1639474 cri.go:89] found id: ""
	I1216 06:42:09.050263 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.050270 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:09.050275 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:09.050332 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:09.075763 1639474 cri.go:89] found id: ""
	I1216 06:42:09.075778 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.075786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:09.075791 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:09.075847 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:09.102027 1639474 cri.go:89] found id: ""
	I1216 06:42:09.102042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.102050 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:09.102058 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:09.102072 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:09.131304 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:09.131322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:09.197595 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:09.197616 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:09.214311 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:09.214329 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:09.280261 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:09.280272 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:09.280287 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:11.849647 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:11.859759 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:11.859820 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:11.885934 1639474 cri.go:89] found id: ""
	I1216 06:42:11.885948 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.885955 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:11.885960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:11.886024 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:11.915333 1639474 cri.go:89] found id: ""
	I1216 06:42:11.915347 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.915354 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:11.915359 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:11.915420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:11.958797 1639474 cri.go:89] found id: ""
	I1216 06:42:11.958811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.958818 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:11.958823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:11.958882 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:11.986843 1639474 cri.go:89] found id: ""
	I1216 06:42:11.986858 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.986865 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:11.986870 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:11.986928 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:12.016252 1639474 cri.go:89] found id: ""
	I1216 06:42:12.016268 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.016275 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:12.016280 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:12.016340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:12.047250 1639474 cri.go:89] found id: ""
	I1216 06:42:12.047264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.047271 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:12.047276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:12.047334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:12.073692 1639474 cri.go:89] found id: ""
	I1216 06:42:12.073706 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.073713 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:12.073721 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:12.073732 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:12.137759 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:12.137769 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:12.137780 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:12.206794 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:12.206815 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:12.235894 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:12.235910 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:12.304248 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:12.304267 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:14.819229 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:14.829519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:14.829579 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:14.854644 1639474 cri.go:89] found id: ""
	I1216 06:42:14.854658 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.854665 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:14.854670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:14.854744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:14.879759 1639474 cri.go:89] found id: ""
	I1216 06:42:14.879774 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.879781 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:14.879785 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:14.879846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:14.914620 1639474 cri.go:89] found id: ""
	I1216 06:42:14.914633 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.914640 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:14.914645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:14.914706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:14.949457 1639474 cri.go:89] found id: ""
	I1216 06:42:14.949470 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.949477 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:14.949482 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:14.949539 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:14.978393 1639474 cri.go:89] found id: ""
	I1216 06:42:14.978407 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.978414 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:14.978419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:14.978485 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:15.059438 1639474 cri.go:89] found id: ""
	I1216 06:42:15.059454 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.059468 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:15.059474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:15.059560 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:15.087173 1639474 cri.go:89] found id: ""
	I1216 06:42:15.087188 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.087194 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:15.087202 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:15.087212 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:15.157589 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:15.157610 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:15.187757 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:15.187774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:15.256722 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:15.256742 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:15.271447 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:15.271464 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:15.332113 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:17.832401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:17.842950 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:17.843012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:17.871468 1639474 cri.go:89] found id: ""
	I1216 06:42:17.871483 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.871490 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:17.871496 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:17.871554 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:17.904274 1639474 cri.go:89] found id: ""
	I1216 06:42:17.904288 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.904295 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:17.904299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:17.904355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:17.936320 1639474 cri.go:89] found id: ""
	I1216 06:42:17.936334 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.936341 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:17.936346 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:17.936403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:17.967750 1639474 cri.go:89] found id: ""
	I1216 06:42:17.967764 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.967771 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:17.967775 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:17.967833 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:17.993994 1639474 cri.go:89] found id: ""
	I1216 06:42:17.994008 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.994016 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:17.994021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:17.994085 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:18.021367 1639474 cri.go:89] found id: ""
	I1216 06:42:18.021382 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.021390 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:18.021395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:18.021463 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:18.052100 1639474 cri.go:89] found id: ""
	I1216 06:42:18.052115 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.052122 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:18.052130 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:18.052141 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:18.117261 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:18.117282 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:18.132219 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:18.132235 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:18.198118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:18.198128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:18.198139 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:18.265118 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:18.265138 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:20.794027 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:20.803718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:20.803782 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:20.828191 1639474 cri.go:89] found id: ""
	I1216 06:42:20.828205 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.828212 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:20.828217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:20.828278 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:20.853411 1639474 cri.go:89] found id: ""
	I1216 06:42:20.853425 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.853432 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:20.853437 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:20.853499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:20.877825 1639474 cri.go:89] found id: ""
	I1216 06:42:20.877841 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.877848 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:20.877853 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:20.877908 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:20.910891 1639474 cri.go:89] found id: ""
	I1216 06:42:20.910904 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.910911 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:20.910916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:20.910973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:20.941025 1639474 cri.go:89] found id: ""
	I1216 06:42:20.941039 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.941045 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:20.941050 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:20.941108 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:20.973633 1639474 cri.go:89] found id: ""
	I1216 06:42:20.973647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.973654 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:20.973659 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:20.973714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:21.002805 1639474 cri.go:89] found id: ""
	I1216 06:42:21.002821 1639474 logs.go:282] 0 containers: []
	W1216 06:42:21.002828 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:21.002837 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:21.002849 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:21.068941 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:21.068961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:21.083829 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:21.083853 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:21.147337 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:21.147347 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:21.147359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:21.215583 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:21.215604 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.745376 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:23.755709 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:23.755771 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:23.781141 1639474 cri.go:89] found id: ""
	I1216 06:42:23.781155 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.781162 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:23.781168 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:23.781234 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:23.811661 1639474 cri.go:89] found id: ""
	I1216 06:42:23.811675 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.811683 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:23.811687 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:23.811745 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:23.837608 1639474 cri.go:89] found id: ""
	I1216 06:42:23.837623 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.837630 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:23.837635 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:23.837694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:23.864015 1639474 cri.go:89] found id: ""
	I1216 06:42:23.864041 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.864051 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:23.864057 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:23.864124 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:23.889789 1639474 cri.go:89] found id: ""
	I1216 06:42:23.889806 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.889813 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:23.889818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:23.889877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:23.918576 1639474 cri.go:89] found id: ""
	I1216 06:42:23.918590 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.918598 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:23.918603 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:23.918661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:23.950516 1639474 cri.go:89] found id: ""
	I1216 06:42:23.950531 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.950537 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:23.950545 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:23.950555 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.980911 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:23.980928 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:24.047333 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:24.047355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:24.063020 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:24.063037 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:24.131565 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:24.131574 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:24.131593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.704797 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:26.715064 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:26.715144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:26.741016 1639474 cri.go:89] found id: ""
	I1216 06:42:26.741030 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.741037 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:26.741043 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:26.741102 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:26.771178 1639474 cri.go:89] found id: ""
	I1216 06:42:26.771192 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.771200 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:26.771205 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:26.771263 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:26.796426 1639474 cri.go:89] found id: ""
	I1216 06:42:26.796440 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.796447 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:26.796452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:26.796530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:26.822428 1639474 cri.go:89] found id: ""
	I1216 06:42:26.822444 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.822451 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:26.822456 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:26.822512 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:26.855530 1639474 cri.go:89] found id: ""
	I1216 06:42:26.855545 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.855552 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:26.855557 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:26.855617 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:26.880135 1639474 cri.go:89] found id: ""
	I1216 06:42:26.880149 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.880156 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:26.880161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:26.880219 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:26.917307 1639474 cri.go:89] found id: ""
	I1216 06:42:26.917321 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.917327 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:26.917335 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:26.917347 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.997666 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:26.997690 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:27.033638 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:27.033662 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:27.104861 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:27.104880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:27.119683 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:27.119699 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:27.187945 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:29.688270 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:29.698566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:29.698629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:29.724518 1639474 cri.go:89] found id: ""
	I1216 06:42:29.724532 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.724539 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:29.724544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:29.724605 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:29.749436 1639474 cri.go:89] found id: ""
	I1216 06:42:29.749451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.749458 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:29.749463 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:29.749525 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:29.774261 1639474 cri.go:89] found id: ""
	I1216 06:42:29.774276 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.774283 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:29.774290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:29.774349 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:29.799455 1639474 cri.go:89] found id: ""
	I1216 06:42:29.799469 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.799478 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:29.799483 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:29.799541 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:29.823692 1639474 cri.go:89] found id: ""
	I1216 06:42:29.823707 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.823714 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:29.823718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:29.823784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:29.851131 1639474 cri.go:89] found id: ""
	I1216 06:42:29.851156 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.851164 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:29.851169 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:29.851239 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:29.875892 1639474 cri.go:89] found id: ""
	I1216 06:42:29.875906 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.875923 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:29.875931 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:29.875942 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:29.949752 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:29.949772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:29.966843 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:29.966860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:30.075177 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:30.075189 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:30.075201 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:30.153503 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:30.153525 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:32.683959 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:32.695552 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:32.695611 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:32.719250 1639474 cri.go:89] found id: ""
	I1216 06:42:32.719264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.719271 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:32.719276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:32.719335 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:32.744437 1639474 cri.go:89] found id: ""
	I1216 06:42:32.744451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.744459 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:32.744464 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:32.744568 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:32.772181 1639474 cri.go:89] found id: ""
	I1216 06:42:32.772196 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.772204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:32.772209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:32.772273 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:32.799021 1639474 cri.go:89] found id: ""
	I1216 06:42:32.799035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.799041 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:32.799046 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:32.799103 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:32.826452 1639474 cri.go:89] found id: ""
	I1216 06:42:32.826466 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.826473 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:32.826478 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:32.826535 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:32.854867 1639474 cri.go:89] found id: ""
	I1216 06:42:32.854881 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.854888 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:32.854893 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:32.854953 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:32.883584 1639474 cri.go:89] found id: ""
	I1216 06:42:32.883608 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.883615 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:32.883624 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:32.883635 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:32.969443 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:32.969472 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:33.000330 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:33.000354 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:33.068289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:33.068311 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:33.083127 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:33.083145 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:33.154304 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:35.655139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:35.665534 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:35.665616 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:35.691995 1639474 cri.go:89] found id: ""
	I1216 06:42:35.692009 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.692016 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:35.692021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:35.692079 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:35.718728 1639474 cri.go:89] found id: ""
	I1216 06:42:35.718742 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.718748 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:35.718753 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:35.718812 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:35.743314 1639474 cri.go:89] found id: ""
	I1216 06:42:35.743328 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.743334 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:35.743339 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:35.743400 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:35.767871 1639474 cri.go:89] found id: ""
	I1216 06:42:35.767885 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.767893 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:35.767897 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:35.767958 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:35.791769 1639474 cri.go:89] found id: ""
	I1216 06:42:35.791783 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.791790 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:35.791795 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:35.791854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:35.819002 1639474 cri.go:89] found id: ""
	I1216 06:42:35.819016 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.819023 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:35.819028 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:35.819083 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:35.843378 1639474 cri.go:89] found id: ""
	I1216 06:42:35.843392 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.843399 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:35.843407 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:35.843417 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:35.912874 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:35.912893 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:35.930936 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:35.930952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:36.006314 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:36.006326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:36.006338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:36.080077 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:36.080099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:38.612139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:38.622353 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:38.622412 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:38.648583 1639474 cri.go:89] found id: ""
	I1216 06:42:38.648597 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.648604 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:38.648613 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:38.648671 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:38.674035 1639474 cri.go:89] found id: ""
	I1216 06:42:38.674049 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.674056 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:38.674061 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:38.674119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:38.699213 1639474 cri.go:89] found id: ""
	I1216 06:42:38.699228 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.699234 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:38.699239 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:38.699294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:38.723415 1639474 cri.go:89] found id: ""
	I1216 06:42:38.723429 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.723436 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:38.723441 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:38.723499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:38.751059 1639474 cri.go:89] found id: ""
	I1216 06:42:38.751074 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.751081 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:38.751086 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:38.751146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:38.779542 1639474 cri.go:89] found id: ""
	I1216 06:42:38.779557 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.779584 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:38.779589 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:38.779660 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:38.813466 1639474 cri.go:89] found id: ""
	I1216 06:42:38.813480 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.813488 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:38.813496 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:38.813507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:38.842140 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:38.842158 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:38.908007 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:38.908027 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:38.923600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:38.923618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:38.995488 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:38.995498 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:38.995509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:41.565694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:41.575799 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:41.575860 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:41.600796 1639474 cri.go:89] found id: ""
	I1216 06:42:41.600811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.600817 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:41.600822 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:41.600879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:41.625792 1639474 cri.go:89] found id: ""
	I1216 06:42:41.625807 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.625814 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:41.625818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:41.625875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:41.650243 1639474 cri.go:89] found id: ""
	I1216 06:42:41.650257 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.650264 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:41.650269 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:41.650328 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:41.675889 1639474 cri.go:89] found id: ""
	I1216 06:42:41.675915 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.675923 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:41.675928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:41.675993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:41.703050 1639474 cri.go:89] found id: ""
	I1216 06:42:41.703064 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.703082 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:41.703088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:41.703146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:41.729269 1639474 cri.go:89] found id: ""
	I1216 06:42:41.729283 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.729293 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:41.729299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:41.729369 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:41.753781 1639474 cri.go:89] found id: ""
	I1216 06:42:41.753796 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.753803 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:41.753811 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:41.753821 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:41.783522 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:41.783538 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:41.848274 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:41.848295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:41.863600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:41.863618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:41.936160 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:41.936170 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:41.936181 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.511341 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:44.521587 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:44.521648 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:44.547007 1639474 cri.go:89] found id: ""
	I1216 06:42:44.547021 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.547028 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:44.547033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:44.547096 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:44.572902 1639474 cri.go:89] found id: ""
	I1216 06:42:44.572917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.572924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:44.572928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:44.572995 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:44.598645 1639474 cri.go:89] found id: ""
	I1216 06:42:44.598659 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.598667 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:44.598672 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:44.598731 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:44.627366 1639474 cri.go:89] found id: ""
	I1216 06:42:44.627381 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.627388 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:44.627396 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:44.627452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:44.654294 1639474 cri.go:89] found id: ""
	I1216 06:42:44.654309 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.654319 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:44.654324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:44.654382 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:44.679363 1639474 cri.go:89] found id: ""
	I1216 06:42:44.679378 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.679385 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:44.679392 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:44.679452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:44.714760 1639474 cri.go:89] found id: ""
	I1216 06:42:44.714775 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.714781 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:44.714789 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:44.714800 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:44.779035 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:44.779055 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:44.793727 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:44.793745 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:44.860570 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:44.860581 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:44.860594 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.934290 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:44.934310 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:47.465385 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:47.475377 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:47.475436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:47.503015 1639474 cri.go:89] found id: ""
	I1216 06:42:47.503042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.503049 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:47.503055 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:47.503136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:47.528903 1639474 cri.go:89] found id: ""
	I1216 06:42:47.528917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.528924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:47.528929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:47.528989 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:47.554766 1639474 cri.go:89] found id: ""
	I1216 06:42:47.554781 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.554788 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:47.554792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:47.554858 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:47.585092 1639474 cri.go:89] found id: ""
	I1216 06:42:47.585106 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.585113 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:47.585118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:47.585214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:47.610493 1639474 cri.go:89] found id: ""
	I1216 06:42:47.610508 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.610514 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:47.610519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:47.610577 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:47.635340 1639474 cri.go:89] found id: ""
	I1216 06:42:47.635354 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.635361 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:47.635365 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:47.635424 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:47.661321 1639474 cri.go:89] found id: ""
	I1216 06:42:47.661335 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.661342 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:47.661349 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:47.661360 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:47.726879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:47.726898 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:47.741659 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:47.741684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:47.804784 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:47.804795 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:47.804807 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:47.871075 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:47.871096 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.410207 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:50.419946 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:50.420007 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:50.446668 1639474 cri.go:89] found id: ""
	I1216 06:42:50.446683 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.446689 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:50.446694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:50.446753 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:50.471089 1639474 cri.go:89] found id: ""
	I1216 06:42:50.471119 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.471126 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:50.471131 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:50.471201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:50.496821 1639474 cri.go:89] found id: ""
	I1216 06:42:50.496836 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.496843 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:50.496848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:50.496906 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:50.522621 1639474 cri.go:89] found id: ""
	I1216 06:42:50.522647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.522655 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:50.522660 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:50.522720 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:50.547813 1639474 cri.go:89] found id: ""
	I1216 06:42:50.547828 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.547847 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:50.547858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:50.547926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:50.573695 1639474 cri.go:89] found id: ""
	I1216 06:42:50.573709 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.573716 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:50.573734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:50.573791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:50.597701 1639474 cri.go:89] found id: ""
	I1216 06:42:50.597728 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.597735 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:50.597743 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:50.597754 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.634166 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:50.634183 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:50.700131 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:50.700152 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:50.714678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:50.714694 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:50.782436 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:50.782446 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:50.782457 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:53.352592 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:53.362386 1639474 kubeadm.go:602] duration metric: took 4m3.23343297s to restartPrimaryControlPlane
	W1216 06:42:53.362440 1639474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 06:42:53.362522 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:42:53.770157 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:42:53.783560 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:42:53.791651 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:42:53.791714 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:42:53.800044 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:42:53.800054 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:42:53.800109 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:42:53.808053 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:42:53.808117 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:42:53.815698 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:42:53.823700 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:42:53.823760 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:42:53.831721 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.840020 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:42:53.840081 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.848003 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:42:53.856083 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:42:53.856151 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:42:53.863882 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:42:53.905755 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:42:53.905814 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:42:53.975149 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:42:53.975215 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:42:53.975250 1639474 kubeadm.go:319] OS: Linux
	I1216 06:42:53.975294 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:42:53.975341 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:42:53.975388 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:42:53.975435 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:42:53.975482 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:42:53.975528 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:42:53.975572 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:42:53.975619 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:42:53.975663 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:42:54.043340 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:42:54.043458 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:42:54.043554 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:42:54.051413 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:42:54.053411 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:42:54.053534 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:42:54.053635 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:42:54.053726 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:42:54.053790 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:42:54.053864 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:42:54.053921 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:42:54.054179 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:42:54.054243 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:42:54.054338 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:42:54.054707 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:42:54.054967 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:42:54.055037 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:42:54.157358 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:42:54.374409 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:42:54.451048 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:42:54.729890 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:42:55.123905 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:42:55.124705 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:42:55.129362 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:42:55.130938 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:42:55.131069 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:42:55.131195 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:42:55.132057 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:42:55.147012 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:42:55.147116 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:42:55.155648 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:42:55.155999 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:42:55.156106 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:42:55.287137 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:42:55.287251 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:46:55.288217 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001159637s
	I1216 06:46:55.288243 1639474 kubeadm.go:319] 
	I1216 06:46:55.288304 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:46:55.288336 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:46:55.288440 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:46:55.288445 1639474 kubeadm.go:319] 
	I1216 06:46:55.288565 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:46:55.288597 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:46:55.288627 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:46:55.288630 1639474 kubeadm.go:319] 
	I1216 06:46:55.292707 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:46:55.293173 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:46:55.293300 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:46:55.293545 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:46:55.293552 1639474 kubeadm.go:319] 
	I1216 06:46:55.293641 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 06:46:55.293765 1639474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001159637s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:46:55.293855 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:46:55.704413 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:46:55.717800 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:46:55.717860 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:46:55.726221 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:46:55.726230 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:46:55.726283 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:46:55.734520 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:46:55.734578 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:46:55.742443 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:46:55.750333 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:46:55.750396 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:46:55.758306 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.766326 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:46:55.766405 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.774041 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:46:55.782003 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:46:55.782061 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:46:55.789651 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:46:55.828645 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:46:55.828882 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:46:55.903247 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:46:55.903309 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:46:55.903344 1639474 kubeadm.go:319] OS: Linux
	I1216 06:46:55.903387 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:46:55.903435 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:46:55.903481 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:46:55.903528 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:46:55.903575 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:46:55.903627 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:46:55.903672 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:46:55.903719 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:46:55.903764 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:46:55.978404 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:46:55.978523 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:46:55.978635 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:46:55.988968 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:46:55.992562 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:46:55.992651 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:46:55.992728 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:46:55.992809 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:46:55.992874 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:46:55.992948 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:46:55.993006 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:46:55.993073 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:46:55.993138 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:46:55.993217 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:46:55.993295 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:46:55.993334 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:46:55.993394 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:46:56.216895 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:46:56.479326 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:46:56.885081 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:46:57.284813 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:46:57.705019 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:46:57.705808 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:46:57.708929 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:46:57.712185 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:46:57.712286 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:46:57.712364 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:46:57.713358 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:46:57.728440 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:46:57.729026 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:46:57.736761 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:46:57.737279 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:46:57.737495 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:46:57.864121 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:46:57.864234 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:50:57.863911 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000152952s
	I1216 06:50:57.863934 1639474 kubeadm.go:319] 
	I1216 06:50:57.863990 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:50:57.864023 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:50:57.864128 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:50:57.864133 1639474 kubeadm.go:319] 
	I1216 06:50:57.864236 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:50:57.864267 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:50:57.864298 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:50:57.864301 1639474 kubeadm.go:319] 
	I1216 06:50:57.868420 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:50:57.868920 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:50:57.869030 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:50:57.869291 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:50:57.869296 1639474 kubeadm.go:319] 
	I1216 06:50:57.869364 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:50:57.869421 1639474 kubeadm.go:403] duration metric: took 12m7.776167752s to StartCluster
	I1216 06:50:57.869453 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:50:57.869520 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:50:57.901135 1639474 cri.go:89] found id: ""
	I1216 06:50:57.901151 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.901158 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:50:57.901163 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:50:57.901226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:50:57.925331 1639474 cri.go:89] found id: ""
	I1216 06:50:57.925345 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.925352 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:50:57.925357 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:50:57.925415 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:50:57.950341 1639474 cri.go:89] found id: ""
	I1216 06:50:57.950356 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.950363 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:50:57.950367 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:50:57.950426 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:50:57.975123 1639474 cri.go:89] found id: ""
	I1216 06:50:57.975137 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.975144 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:50:57.975149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:50:57.975208 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:50:58.004659 1639474 cri.go:89] found id: ""
	I1216 06:50:58.004676 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.004684 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:50:58.004689 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:50:58.004760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:50:58.030464 1639474 cri.go:89] found id: ""
	I1216 06:50:58.030478 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.030485 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:50:58.030491 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:50:58.030552 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:50:58.056049 1639474 cri.go:89] found id: ""
	I1216 06:50:58.056063 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.056071 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:50:58.056079 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:50:58.056091 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:50:58.124116 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:50:58.124137 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:50:58.139439 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:50:58.139455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:50:58.229902 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:50:58.229914 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:50:58.229925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:50:58.301956 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:50:58.301977 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:50:58.330306 1639474 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:50:58.330348 1639474 out.go:285] * 
	W1216 06:50:58.330448 1639474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.330506 1639474 out.go:285] * 
	W1216 06:50:58.332927 1639474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:50:58.338210 1639474 out.go:203] 
	W1216 06:50:58.341028 1639474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.341164 1639474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:50:58.341212 1639474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:50:58.344413 1639474 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553471769Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553507896Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553554657Z" level=info msg="Create NRI interface"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553657485Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553665107Z" level=info msg="runtime interface created"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553674699Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553680746Z" level=info msg="runtime interface starting up..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553686137Z" level=info msg="starting plugins..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553698814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553771561Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:38:48 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.046654305Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=2afa36a7-e595-4e9e-9866-100014f74db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.047561496Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bfee085e-d788-43aa-852e-e818968557f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048165668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8209edd3-2ad3-4cea-9d15-760a1b94c10d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048839782Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f38b3b25-171e-488b-9dbb-3a4615d07ce7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049385123Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=674d3a91-05c7-4375-a638-2bb51d77e82a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049934157Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d7315967-45e5-4ab2-b579-15a88e3c5cf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.050441213Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2d27746-f739-4711-a521-d245b78e775c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.981581513Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=cc27c34f-1129-41fd-83b5-8698b0697603 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982462832Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f632e983-ad57-48b2-98c3-8802e4b6bb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982972654Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4d99c7a4-52a2-4a4f-9569-9d8a29ee230d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983463866Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=824a4ba3-63ed-49ce-a194-3bf34f462483 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983972891Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=52cc52f0-f1ca-4fc4-a91a-13dd8c19e754 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984501125Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=baf81d2d-269c-44fd-a82c-811876adf596 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984974015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=88fe0e4e-4ea7-4b38-a635-f3138f370377 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:53:32.239271   23471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:32.239903   23471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:32.241494   23471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:32.241867   23471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:32.243360   23471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:53:32 up  9:36,  0 user,  load average: 0.55, 0.25, 0.43
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:53:29 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:30 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1165.
	Dec 16 06:53:30 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:30 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:30 functional-364120 kubelet[23328]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:30 functional-364120 kubelet[23328]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:30 functional-364120 kubelet[23328]: E1216 06:53:30.458847   23328 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:30 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:30 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:31 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1166.
	Dec 16 06:53:31 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:31 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:31 functional-364120 kubelet[23364]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:31 functional-364120 kubelet[23364]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:31 functional-364120 kubelet[23364]: E1216 06:53:31.212946   23364 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:31 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:31 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:31 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1167.
	Dec 16 06:53:31 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:31 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:31 functional-364120 kubelet[23394]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:31 functional-364120 kubelet[23394]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:31 functional-364120 kubelet[23394]: E1216 06:53:31.966281   23394 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:31 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:31 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
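The kubelet journal above shows why the 10248 health check never succeeds: kubelet v1.35.0-beta.0 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"). The log itself names two knobs, restated below as a hedged sketch; the KubeletConfiguration field casing is an assumption taken from the warning text, not verified against the v1.35 kubelet reference.

	# Suggestion quoted from the minikube output above (cgroup driver mismatch on systemd hosts):
	out/minikube-linux-arm64 start -p functional-364120 --extra-config=kubelet.cgroup-driver=systemd
	# The kubeadm SystemVerification warning says cgroup v1 support must be opted into via the
	# kubelet configuration option 'FailCgroupV1'; a hypothetical KubeletConfiguration fragment:
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false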
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (340.6454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.18s)
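The status probe above extracts a single field from minikube's status via a Go template. For local reproduction, the same probe plus a full JSON dump can be used (the JSON output flag is a standard minikube option; treat the exact field set as an assumption about the status schema):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120
	out/minikube-linux-arm64 status -o json -p functional-364120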

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-364120 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-364120 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.152474ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-364120 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-364120 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-364120 describe po hello-node-connect: exit status 1 (70.221184ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-364120 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-364120 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-364120 logs -l app=hello-node-connect: exit status 1 (62.208876ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-364120 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-364120 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-364120 describe svc hello-node-connect: exit status 1 (81.5195ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-364120 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
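Every kubectl call in this failure reports the same root cause: "dial tcp 192.168.49.2:8441: connect: connection refused", i.e. the apiserver endpoint itself is not accepting connections, so the hello-node-connect objects are never created and every describe/logs call that follows fails the same way. As a minimal sketch (assuming the endpoint quoted in the stderr above; this helper is not part of the test suite), a plain TCP dial is enough to separate "apiserver not listening" from "kubectl pointed at the wrong host or port":

// probe_apiserver.go - hypothetical helper, not minikube code: dial the
// endpoint that kubectl reported as refusing connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // taken from the stderr above; adjust per profile
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver endpoint %s unreachable: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("apiserver endpoint %s accepts TCP connections\n", addr)
}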
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
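The inspect output above shows the guest apiserver port 8441/tcp published on 127.0.0.1:34263; that mapping is how the host reaches the control plane, since 192.168.49.2 is only routable on the "functional-364120" docker network. A minimal sketch (hypothetical helper, reusing the same inspect template style that the minikube logs below apply to 22/tcp) recovers that published port programmatically:

// hostport.go - hypothetical helper, not minikube code: read the host port
// Docker published for the guest apiserver port 8441/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-364120").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}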
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (296.748701ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
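The --format value here is a Go text/template rendered against minikube's status output, so only the Host field ("Running") is printed while the command can still exit non-zero when other components are not in the expected state, which is why the harness notes that the exit status "may be ok". A minimal sketch of that mechanism (field names are illustrative, not the exact minikube struct):

// statusformat.go - illustrative only: how a --format template like
// "{{.Host}}" selects a single field from a status struct.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Running" even though the other components are stopped.
	t.Execute(os.Stdout, st)
}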
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-364120 cache reload                                                                                                                               │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ ssh     │ functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │ 16 Dec 25 06:38 UTC │
	│ kubectl │ functional-364120 kubectl -- --context functional-364120 get pods                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ start   │ -p functional-364120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:38 UTC │                     │
	│ cp      │ functional-364120 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ config  │ functional-364120 config unset cpus                                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ config  │ functional-364120 config get cpus                                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │                     │
	│ config  │ functional-364120 config set cpus 2                                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ config  │ functional-364120 config get cpus                                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ config  │ functional-364120 config unset cpus                                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ ssh     │ functional-364120 ssh -n functional-364120 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ config  │ functional-364120 config get cpus                                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │                     │
	│ ssh     │ functional-364120 ssh echo hello                                                                                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ cp      │ functional-364120 cp functional-364120:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2475148058/001/cp-test.txt │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ ssh     │ functional-364120 ssh cat /etc/hostname                                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ ssh     │ functional-364120 ssh -n functional-364120 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ tunnel  │ functional-364120 tunnel --alsologtostderr                                                                                                                   │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │                     │
	│ tunnel  │ functional-364120 tunnel --alsologtostderr                                                                                                                   │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │                     │
	│ cp      │ functional-364120 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ tunnel  │ functional-364120 tunnel --alsologtostderr                                                                                                                   │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │                     │
	│ ssh     │ functional-364120 ssh -n functional-364120 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:51 UTC │ 16 Dec 25 06:51 UTC │
	│ addons  │ functional-364120 addons list                                                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ addons  │ functional-364120 addons list -o json                                                                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:38:45
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:38:45.382114 1639474 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:38:45.382275 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382279 1639474 out.go:374] Setting ErrFile to fd 2...
	I1216 06:38:45.382283 1639474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:38:45.382644 1639474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:38:45.383081 1639474 out.go:368] Setting JSON to false
	I1216 06:38:45.383946 1639474 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":33677,"bootTime":1765833449,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:38:45.384032 1639474 start.go:143] virtualization:  
	I1216 06:38:45.387610 1639474 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:38:45.391422 1639474 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:38:45.391485 1639474 notify.go:221] Checking for updates...
	I1216 06:38:45.397275 1639474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:38:45.400538 1639474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:38:45.403348 1639474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:38:45.406183 1639474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:38:45.410019 1639474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:38:45.413394 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:45.413485 1639474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:38:45.451796 1639474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:38:45.451901 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.529304 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.519310041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.529400 1639474 docker.go:319] overlay module found
	I1216 06:38:45.532456 1639474 out.go:179] * Using the docker driver based on existing profile
	I1216 06:38:45.535342 1639474 start.go:309] selected driver: docker
	I1216 06:38:45.535352 1639474 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.535432 1639474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:38:45.535555 1639474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:38:45.605792 1639474 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 06:38:45.594564391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:38:45.606168 1639474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:38:45.606189 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:45.606237 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:45.606285 1639474 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:45.611347 1639474 out.go:179] * Starting "functional-364120" primary control-plane node in "functional-364120" cluster
	I1216 06:38:45.614388 1639474 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:38:45.617318 1639474 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:38:45.620204 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:45.620247 1639474 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:38:45.620256 1639474 cache.go:65] Caching tarball of preloaded images
	I1216 06:38:45.620287 1639474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:38:45.620351 1639474 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 06:38:45.620360 1639474 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:38:45.620487 1639474 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/config.json ...
	I1216 06:38:45.639567 1639474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:38:45.639578 1639474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:38:45.639591 1639474 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:38:45.639630 1639474 start.go:360] acquireMachinesLock for functional-364120: {Name:mkbf042218fd4d1baa11f8b1e4a71170f4ad9912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:38:45.639687 1639474 start.go:364] duration metric: took 37.908µs to acquireMachinesLock for "functional-364120"
	I1216 06:38:45.639706 1639474 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:38:45.639711 1639474 fix.go:54] fixHost starting: 
	I1216 06:38:45.639996 1639474 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
	I1216 06:38:45.656952 1639474 fix.go:112] recreateIfNeeded on functional-364120: state=Running err=<nil>
	W1216 06:38:45.656970 1639474 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:38:45.660116 1639474 out.go:252] * Updating the running docker "functional-364120" container ...
	I1216 06:38:45.660138 1639474 machine.go:94] provisionDockerMachine start ...
	I1216 06:38:45.660218 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.677387 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.677705 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.677711 1639474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:38:45.812247 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.812262 1639474 ubuntu.go:182] provisioning hostname "functional-364120"
	I1216 06:38:45.812325 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.830038 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.830333 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.830342 1639474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-364120 && echo "functional-364120" | sudo tee /etc/hostname
	I1216 06:38:45.969440 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-364120
	
	I1216 06:38:45.969519 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:45.987438 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:45.987738 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:45.987751 1639474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-364120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-364120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-364120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:38:46.120750 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:38:46.120766 1639474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 06:38:46.120795 1639474 ubuntu.go:190] setting up certificates
	I1216 06:38:46.120811 1639474 provision.go:84] configureAuth start
	I1216 06:38:46.120880 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:46.139450 1639474 provision.go:143] copyHostCerts
	I1216 06:38:46.139518 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 06:38:46.139535 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 06:38:46.139611 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 06:38:46.139701 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 06:38:46.139705 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 06:38:46.139730 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 06:38:46.139777 1639474 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 06:38:46.139780 1639474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 06:38:46.139802 1639474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 06:38:46.139846 1639474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.functional-364120 san=[127.0.0.1 192.168.49.2 functional-364120 localhost minikube]
	I1216 06:38:46.453267 1639474 provision.go:177] copyRemoteCerts
	I1216 06:38:46.453323 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:38:46.453367 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.472384 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:46.568304 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:38:46.585458 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:38:46.602822 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:38:46.619947 1639474 provision.go:87] duration metric: took 499.122604ms to configureAuth
	I1216 06:38:46.619964 1639474 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:38:46.620160 1639474 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:38:46.620252 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.637350 1639474 main.go:143] libmachine: Using SSH client type: native
	I1216 06:38:46.637660 1639474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1216 06:38:46.637671 1639474 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 06:38:46.957629 1639474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 06:38:46.957641 1639474 machine.go:97] duration metric: took 1.297496853s to provisionDockerMachine
	I1216 06:38:46.957652 1639474 start.go:293] postStartSetup for "functional-364120" (driver="docker")
	I1216 06:38:46.957670 1639474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:38:46.957741 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:38:46.957790 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:46.978202 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.080335 1639474 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:38:47.083578 1639474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:38:47.083597 1639474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:38:47.083607 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 06:38:47.083662 1639474 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 06:38:47.083735 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 06:38:47.083808 1639474 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts -> hosts in /etc/test/nested/copy/1599255
	I1216 06:38:47.083855 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1599255
	I1216 06:38:47.091346 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:47.108874 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts --> /etc/test/nested/copy/1599255/hosts (40 bytes)
	I1216 06:38:47.126774 1639474 start.go:296] duration metric: took 169.103296ms for postStartSetup
	I1216 06:38:47.126870 1639474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:38:47.126918 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.145224 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.237421 1639474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:38:47.242526 1639474 fix.go:56] duration metric: took 1.602809118s for fixHost
	I1216 06:38:47.242542 1639474 start.go:83] releasing machines lock for "functional-364120", held for 1.602847814s
	I1216 06:38:47.242635 1639474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-364120
	I1216 06:38:47.260121 1639474 ssh_runner.go:195] Run: cat /version.json
	I1216 06:38:47.260167 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.260174 1639474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 06:38:47.260224 1639474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
	I1216 06:38:47.277503 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.283903 1639474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
	I1216 06:38:47.464356 1639474 ssh_runner.go:195] Run: systemctl --version
	I1216 06:38:47.476410 1639474 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 06:38:47.514461 1639474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:38:47.518820 1639474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:38:47.518882 1639474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:38:47.526809 1639474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:38:47.526823 1639474 start.go:496] detecting cgroup driver to use...
	I1216 06:38:47.526855 1639474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:38:47.526909 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:38:47.542915 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:38:47.556456 1639474 docker.go:218] disabling cri-docker service (if available) ...
	I1216 06:38:47.556532 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 06:38:47.572387 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 06:38:47.585623 1639474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 06:38:47.693830 1639474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 06:38:47.836192 1639474 docker.go:234] disabling docker service ...
	I1216 06:38:47.836253 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 06:38:47.851681 1639474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 06:38:47.865315 1639474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 06:38:47.985223 1639474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 06:38:48.104393 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:38:48.118661 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:38:48.136892 1639474 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 06:38:48.136961 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.147508 1639474 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 06:38:48.147579 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.156495 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.165780 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.174392 1639474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:38:48.182433 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.191004 1639474 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.198914 1639474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 06:38:48.207365 1639474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:38:48.214548 1639474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:38:48.221727 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.346771 1639474 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 06:38:48.562751 1639474 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 06:38:48.562822 1639474 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 06:38:48.566564 1639474 start.go:564] Will wait 60s for crictl version
	I1216 06:38:48.566626 1639474 ssh_runner.go:195] Run: which crictl
	I1216 06:38:48.570268 1639474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:38:48.600286 1639474 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 06:38:48.600360 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.630102 1639474 ssh_runner.go:195] Run: crio --version
	I1216 06:38:48.662511 1639474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 06:38:48.665401 1639474 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:38:48.681394 1639474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 06:38:48.688428 1639474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 06:38:48.691264 1639474 kubeadm.go:884] updating cluster {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:38:48.691424 1639474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:38:48.691501 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.730823 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.730835 1639474 crio.go:433] Images already preloaded, skipping extraction
	I1216 06:38:48.730892 1639474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 06:38:48.756054 1639474 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 06:38:48.756075 1639474 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:38:48.756081 1639474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 06:38:48.756185 1639474 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-364120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:38:48.756284 1639474 ssh_runner.go:195] Run: crio config
	I1216 06:38:48.821920 1639474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 06:38:48.821940 1639474 cni.go:84] Creating CNI manager for ""
	I1216 06:38:48.821953 1639474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:38:48.821961 1639474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:38:48.821989 1639474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-364120 NodeName:functional-364120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:38:48.822118 1639474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-364120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:38:48.822186 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:38:48.830098 1639474 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:38:48.830166 1639474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:38:48.837393 1639474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 06:38:48.849769 1639474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:38:48.862224 1639474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
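
Note: the multi-document YAML rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has just been copied to /var/tmp/minikube/kubeadm.yaml.new. The only user-driven change in it is the apiserver enable-admission-plugins value, which the extraconfig.go:125 line shows replacing the default plugin list. The sketch below is a minimal illustration of that "user-supplied extra option wins over the component default" rule; it is not minikube's actual code, and the function name mergeExtraOptions is made up for illustration.

    package main

    import "fmt"

    // mergeExtraOptions combines a component's default flag values with
    // user-supplied overrides (minikube's --extra-config); a user value
    // replaces the default for the same key, as seen in the log where
    // enable-admission-plugins becomes "NamespaceAutoProvision".
    func mergeExtraOptions(defaults, extra map[string]string) map[string]string {
    	merged := make(map[string]string, len(defaults)+len(extra))
    	for k, v := range defaults {
    		merged[k] = v
    	}
    	for k, v := range extra {
    		merged[k] = v // user-provided value wins
    	}
    	return merged
    }

    func main() {
    	defaults := map[string]string{
    		"enable-admission-plugins": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota",
    	}
    	user := map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"}
    	fmt.Println(mergeExtraOptions(defaults, user))
    }
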
	I1216 06:38:48.875020 1639474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:38:48.878641 1639474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:38:48.988462 1639474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:38:49.398022 1639474 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120 for IP: 192.168.49.2
	I1216 06:38:49.398033 1639474 certs.go:195] generating shared ca certs ...
	I1216 06:38:49.398047 1639474 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:38:49.398216 1639474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 06:38:49.398259 1639474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 06:38:49.398266 1639474 certs.go:257] generating profile certs ...
	I1216 06:38:49.398355 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.key
	I1216 06:38:49.398397 1639474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key.a6be103a
	I1216 06:38:49.398442 1639474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key
	I1216 06:38:49.398557 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 06:38:49.398591 1639474 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 06:38:49.398598 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 06:38:49.398627 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 06:38:49.398648 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 06:38:49.398673 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 06:38:49.398722 1639474 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 06:38:49.399378 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:38:49.420435 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 06:38:49.440537 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:38:49.460786 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 06:38:49.480628 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:38:49.497487 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:38:49.514939 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:38:49.532313 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:38:49.550215 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 06:38:49.580225 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 06:38:49.597583 1639474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:38:49.615627 1639474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:38:49.629067 1639474 ssh_runner.go:195] Run: openssl version
	I1216 06:38:49.635264 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.642707 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 06:38:49.650527 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654313 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.654369 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 06:38:49.695142 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:38:49.702542 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.709833 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 06:38:49.717202 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720835 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.720891 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 06:38:49.762100 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:38:49.769702 1639474 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.777475 1639474 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:38:49.785134 1639474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789017 1639474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.789075 1639474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:38:49.830097 1639474 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:38:49.837887 1639474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:38:49.841718 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:38:49.883003 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:38:49.923792 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:38:49.964873 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:38:50.009367 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:38:50.051701 1639474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
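
Note: each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status lets minikube keep the existing control-plane certificates. A rough Go equivalent of that check, assuming a PEM-encoded certificate file and a helper name (certValidFor) invented for illustration:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the first certificate in a PEM file is
    // still valid after duration d, mirroring `openssl x509 -checkend <seconds>`.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
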
	I1216 06:38:50.093263 1639474 kubeadm.go:401] StartCluster: {Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:38:50.093349 1639474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 06:38:50.093423 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.120923 1639474 cri.go:89] found id: ""
	I1216 06:38:50.120988 1639474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:38:50.128935 1639474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:38:50.128944 1639474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:38:50.129001 1639474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:38:50.136677 1639474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.137223 1639474 kubeconfig.go:125] found "functional-364120" server: "https://192.168.49.2:8441"
	I1216 06:38:50.138591 1639474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:38:50.148403 1639474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 06:24:13.753381452 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 06:38:48.871691407 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
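
Note: the drift check above is simply `diff -u` between the kubeadm config written at the previous start and the newly rendered one; any difference (here, the admission-plugins value) makes minikube reconfigure the cluster instead of fast-restarting it. A hedged sketch of that decision follows, using diff's exit status convention (0 = identical, 1 = files differ, anything else = error); it runs diff locally rather than through minikube's SSH runner, and is illustrative only.

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new` and interprets the exit code:
    // 0 means the files match, 1 means they differ (drift), anything else
    // is a real error (for example, a missing file).
    func configDrifted(oldPath, newPath string) (bool, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, nil // identical: safe to reuse the old config
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		fmt.Printf("detected config drift:\n%s", out)
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted, err)
    }
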
	I1216 06:38:50.148423 1639474 kubeadm.go:1161] stopping kube-system containers ...
	I1216 06:38:50.148434 1639474 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 06:38:50.148512 1639474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 06:38:50.182168 1639474 cri.go:89] found id: ""
	I1216 06:38:50.182231 1639474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 06:38:50.201521 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:38:50.209281 1639474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 06:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 06:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 06:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 06:28 /etc/kubernetes/scheduler.conf
	
	I1216 06:38:50.209338 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:38:50.217195 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:38:50.224648 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.224702 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:38:50.231990 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.239836 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.239894 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:38:50.247352 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:38:50.254862 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:38:50.254916 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:38:50.262178 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:38:50.270092 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:50.316982 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.327287 1639474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010279379s)
	I1216 06:38:51.327357 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.524152 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:38:51.584718 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
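
Note: because existing configuration files were found, the restart path replays individual `kubeadm init` phases against the new config (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full `kubeadm init`. A simplified local sketch of that sequence is below; the binary path, config path, and phase names come from the log, while the loop structure and error handling are assumptions for illustration.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
    	config := "/var/tmp/minikube/kubeadm.yaml"
    	// Phases replayed during a control-plane restart, in the order seen in the log.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
    		args = append(args, "--config", config)
    		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
    			return
    		}
    	}
    	fmt.Println("all phases completed")
    }
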
	I1216 06:38:51.627519 1639474 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:38:51.627603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.127996 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:52.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.128739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:53.628621 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.128741 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:54.627831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.128517 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:55.628413 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:56.627801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.128288 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:57.628401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.128329 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:58.627998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.127831 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:38:59.628547 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:00.628540 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.128146 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:01.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.128721 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:02.628766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:03.628489 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:04.627784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.128544 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:05.627790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.128535 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:06.627955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.127765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:07.627817 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.128692 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:08.628069 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:09.627921 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.128708 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:10.627689 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.127821 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:11.627890 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.127687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:12.628412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.128182 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:13.627796 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.128611 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:14.628298 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.127795 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:15.628147 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.127806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:16.627762 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.127677 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:17.628043 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.127752 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:18.627697 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.128437 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:19.627779 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.128353 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:20.628739 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.128542 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:21.628449 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.127780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:22.628679 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.128464 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:23.628609 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.127698 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:24.628073 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.128615 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:25.627743 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.127794 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:26.628605 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.128439 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:27.627806 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:28.628042 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.128637 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:29.627742 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.128694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:30.627803 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.127790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:31.628497 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.127786 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:32.627780 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.127788 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:33.627974 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.128440 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:34.628685 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.128622 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:35.628715 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.128328 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:36.628129 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.127678 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:37.628187 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.128724 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:38.627765 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.127823 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:39.627834 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.128417 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:40.628784 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.128501 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:41.628458 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.128381 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:42.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.128387 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:43.627769 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.128638 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:44.627687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.128571 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:45.628346 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.128443 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:46.628500 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.128632 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:47.628608 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.128412 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:48.628099 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.128601 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:49.627888 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.127801 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:50.628098 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:51.127749 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
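
Note: from 06:38:51 to 06:39:51 the loop above runs `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms and never finds an apiserver process, so it gives up on this window and starts gathering diagnostics (crictl, journalctl, dmesg, describe nodes) below. A minimal poll-until-deadline sketch of that pattern follows; the interval and timeout values are illustrative, not taken from minikube's source.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline passes.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }

    func main() {
    	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
    	fmt.Println(err)
    }
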
	I1216 06:39:51.627803 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:51.627880 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:51.662321 1639474 cri.go:89] found id: ""
	I1216 06:39:51.662334 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.662341 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:51.662347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:51.662418 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:51.693006 1639474 cri.go:89] found id: ""
	I1216 06:39:51.693020 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.693027 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:51.693032 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:51.693091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:51.719156 1639474 cri.go:89] found id: ""
	I1216 06:39:51.719169 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.719176 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:51.719181 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:51.719237 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:51.745402 1639474 cri.go:89] found id: ""
	I1216 06:39:51.745416 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.745423 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:51.745429 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:51.745492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:51.771770 1639474 cri.go:89] found id: ""
	I1216 06:39:51.771784 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.771791 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:51.771796 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:51.771854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:51.797172 1639474 cri.go:89] found id: ""
	I1216 06:39:51.797186 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.797192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:51.797198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:51.797257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:51.825478 1639474 cri.go:89] found id: ""
	I1216 06:39:51.825492 1639474 logs.go:282] 0 containers: []
	W1216 06:39:51.825499 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:51.825506 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:51.825516 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:51.897574 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:51.897593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:51.925635 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:51.925652 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:51.993455 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:51.993477 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:52.027866 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:52.027883 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:52.096535 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:52.087042   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.087741   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.089643   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.090378   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:52.092367   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
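
Note: every kubectl attempt above fails with `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port at all (as opposed to a timeout or a TLS/auth failure), which matches the empty `crictl ps` results for kube-apiserver. One quick way to distinguish those cases is a plain TCP dial, sketched below; the address comes from the log, everything else is illustrative.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A refused connection returns almost immediately with "connection refused";
    	// a firewalled or hung endpoint would instead hit the dial timeout.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8441")
    }
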
	I1216 06:39:54.597178 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:54.607445 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:54.607507 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:54.634705 1639474 cri.go:89] found id: ""
	I1216 06:39:54.634719 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.634733 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:54.634739 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:54.634800 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:54.668209 1639474 cri.go:89] found id: ""
	I1216 06:39:54.668223 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.668230 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:54.668235 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:54.668293 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:54.703300 1639474 cri.go:89] found id: ""
	I1216 06:39:54.703314 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.703321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:54.703326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:54.703385 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:54.732154 1639474 cri.go:89] found id: ""
	I1216 06:39:54.732168 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.732175 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:54.732180 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:54.732241 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:54.758222 1639474 cri.go:89] found id: ""
	I1216 06:39:54.758237 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.758244 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:54.758249 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:54.758309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:54.783433 1639474 cri.go:89] found id: ""
	I1216 06:39:54.783456 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.783463 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:54.783474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:54.783544 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:54.811264 1639474 cri.go:89] found id: ""
	I1216 06:39:54.811277 1639474 logs.go:282] 0 containers: []
	W1216 06:39:54.811284 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:54.811291 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:54.811302 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:54.876784 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:54.876805 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:54.891733 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:54.891749 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:54.963951 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:54.956444   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.956899   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958408   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.958719   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:54.960134   11053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:54.963962 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:54.963975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:55.036358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:55.036380 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:39:57.569339 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:39:57.579596 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:39:57.579659 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:39:57.604959 1639474 cri.go:89] found id: ""
	I1216 06:39:57.604973 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.604980 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:39:57.604985 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:39:57.605045 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:39:57.630710 1639474 cri.go:89] found id: ""
	I1216 06:39:57.630725 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.630731 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:39:57.630736 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:39:57.630794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:39:57.662734 1639474 cri.go:89] found id: ""
	I1216 06:39:57.662748 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.662756 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:39:57.662773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:39:57.662838 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:39:57.699847 1639474 cri.go:89] found id: ""
	I1216 06:39:57.699868 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.699875 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:39:57.699880 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:39:57.699941 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:39:57.726549 1639474 cri.go:89] found id: ""
	I1216 06:39:57.726563 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.726570 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:39:57.726575 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:39:57.726639 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:39:57.752583 1639474 cri.go:89] found id: ""
	I1216 06:39:57.752597 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.752604 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:39:57.752609 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:39:57.752667 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:39:57.780752 1639474 cri.go:89] found id: ""
	I1216 06:39:57.780767 1639474 logs.go:282] 0 containers: []
	W1216 06:39:57.780774 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:39:57.780782 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:39:57.780793 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:39:57.846931 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:39:57.846952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:39:57.862606 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:39:57.862623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:39:57.928743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:39:57.917946   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.918582   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920325   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.920838   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:39:57.922560   11160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:39:57.928764 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:39:57.928775 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:39:57.997232 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:39:57.997254 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:00.537687 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:00.558059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:00.558144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:00.594907 1639474 cri.go:89] found id: ""
	I1216 06:40:00.594929 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.594939 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:00.594953 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:00.595036 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:00.628243 1639474 cri.go:89] found id: ""
	I1216 06:40:00.628272 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.628280 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:00.628294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:00.628377 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:00.667757 1639474 cri.go:89] found id: ""
	I1216 06:40:00.667773 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.667791 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:00.667797 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:00.667873 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:00.707304 1639474 cri.go:89] found id: ""
	I1216 06:40:00.707319 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.707327 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:00.707333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:00.707413 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:00.742620 1639474 cri.go:89] found id: ""
	I1216 06:40:00.742636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.742644 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:00.742650 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:00.742727 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:00.772404 1639474 cri.go:89] found id: ""
	I1216 06:40:00.772421 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.772429 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:00.772435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:00.772526 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:00.800238 1639474 cri.go:89] found id: ""
	I1216 06:40:00.800253 1639474 logs.go:282] 0 containers: []
	W1216 06:40:00.800260 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:00.800268 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:00.800280 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:00.866967 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:00.866989 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:00.883111 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:00.883127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:00.951359 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:00.942477   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.943153   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.944836   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.945488   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:00.947367   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:00.951371 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:00.951382 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:01.020844 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:01.020870 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:03.552704 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:03.563452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:03.563545 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:03.588572 1639474 cri.go:89] found id: ""
	I1216 06:40:03.588585 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.588592 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:03.588598 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:03.588665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:03.617698 1639474 cri.go:89] found id: ""
	I1216 06:40:03.617712 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.617719 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:03.617724 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:03.617784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:03.643270 1639474 cri.go:89] found id: ""
	I1216 06:40:03.643285 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.643291 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:03.643296 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:03.643356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:03.679135 1639474 cri.go:89] found id: ""
	I1216 06:40:03.679148 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.679155 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:03.679160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:03.679217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:03.707978 1639474 cri.go:89] found id: ""
	I1216 06:40:03.707991 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.707998 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:03.708003 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:03.708071 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:03.741796 1639474 cri.go:89] found id: ""
	I1216 06:40:03.741821 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.741827 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:03.741832 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:03.741899 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:03.767959 1639474 cri.go:89] found id: ""
	I1216 06:40:03.767983 1639474 logs.go:282] 0 containers: []
	W1216 06:40:03.767991 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:03.767998 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:03.768009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:03.833601 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:03.833622 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:03.848136 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:03.848154 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:03.911646 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:03.902948   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.903628   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905247   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.905737   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:03.907239   11373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:03.911661 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:03.911672 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:03.980874 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:03.980894 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.512671 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:06.522859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:06.522944 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:06.552384 1639474 cri.go:89] found id: ""
	I1216 06:40:06.552399 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.552406 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:06.552411 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:06.552492 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:06.577262 1639474 cri.go:89] found id: ""
	I1216 06:40:06.577276 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.577293 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:06.577299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:06.577357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:06.603757 1639474 cri.go:89] found id: ""
	I1216 06:40:06.603772 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.603779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:06.603784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:06.603850 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:06.629717 1639474 cri.go:89] found id: ""
	I1216 06:40:06.629732 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.629751 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:06.629756 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:06.629846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:06.665006 1639474 cri.go:89] found id: ""
	I1216 06:40:06.665031 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.665039 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:06.665044 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:06.665109 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:06.698777 1639474 cri.go:89] found id: ""
	I1216 06:40:06.698791 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.698807 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:06.698813 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:06.698879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:06.727424 1639474 cri.go:89] found id: ""
	I1216 06:40:06.727448 1639474 logs.go:282] 0 containers: []
	W1216 06:40:06.727455 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:06.727464 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:06.727475 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:06.758535 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:06.758552 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:06.827915 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:06.827944 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:06.843925 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:06.843949 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:06.913118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:06.904403   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.905354   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907146   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.907549   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:06.909175   11493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:06.913128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:06.913140 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.481120 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:09.491592 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:09.491658 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:09.518336 1639474 cri.go:89] found id: ""
	I1216 06:40:09.518351 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.518358 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:09.518363 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:09.518423 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:09.547930 1639474 cri.go:89] found id: ""
	I1216 06:40:09.547943 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.547950 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:09.547955 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:09.548012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:09.574921 1639474 cri.go:89] found id: ""
	I1216 06:40:09.574935 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.574942 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:09.574947 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:09.575008 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:09.600427 1639474 cri.go:89] found id: ""
	I1216 06:40:09.600495 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.600502 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:09.600508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:09.600567 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:09.628992 1639474 cri.go:89] found id: ""
	I1216 06:40:09.629006 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.629015 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:09.629019 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:09.629080 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:09.667383 1639474 cri.go:89] found id: ""
	I1216 06:40:09.667397 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.667404 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:09.667409 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:09.667468 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:09.710169 1639474 cri.go:89] found id: ""
	I1216 06:40:09.710183 1639474 logs.go:282] 0 containers: []
	W1216 06:40:09.710190 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:09.710197 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:09.710208 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:09.776054 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:09.776075 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:09.790720 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:09.790736 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:09.855182 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:09.847489   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.848014   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849514   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.849979   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:09.851407   11588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:09.855192 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:09.855204 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:09.922382 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:09.922402 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.451670 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:12.461890 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:12.461962 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:12.486630 1639474 cri.go:89] found id: ""
	I1216 06:40:12.486644 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.486650 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:12.486657 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:12.486719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:12.514531 1639474 cri.go:89] found id: ""
	I1216 06:40:12.514545 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.514551 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:12.514558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:12.514621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:12.541612 1639474 cri.go:89] found id: ""
	I1216 06:40:12.541627 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.541633 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:12.541638 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:12.541703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:12.567638 1639474 cri.go:89] found id: ""
	I1216 06:40:12.567652 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.567659 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:12.567664 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:12.567723 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:12.593074 1639474 cri.go:89] found id: ""
	I1216 06:40:12.593089 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.593096 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:12.593101 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:12.593164 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:12.621872 1639474 cri.go:89] found id: ""
	I1216 06:40:12.621886 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.621893 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:12.621898 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:12.621954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:12.658898 1639474 cri.go:89] found id: ""
	I1216 06:40:12.658912 1639474 logs.go:282] 0 containers: []
	W1216 06:40:12.658919 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:12.658927 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:12.658939 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:12.736529 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:12.727901   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.728778   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730401   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.730782   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:12.732350   11689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:12.736540 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:12.736551 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:12.804860 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:12.804881 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:12.834018 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:12.834036 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:12.903542 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:12.903564 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:15.418582 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:15.428941 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:15.429002 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:15.458081 1639474 cri.go:89] found id: ""
	I1216 06:40:15.458096 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.458103 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:15.458109 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:15.458172 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:15.487644 1639474 cri.go:89] found id: ""
	I1216 06:40:15.487658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.487665 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:15.487670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:15.487729 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:15.512552 1639474 cri.go:89] found id: ""
	I1216 06:40:15.512565 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.512572 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:15.512577 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:15.512646 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:15.537944 1639474 cri.go:89] found id: ""
	I1216 06:40:15.537958 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.537965 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:15.537971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:15.538030 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:15.574197 1639474 cri.go:89] found id: ""
	I1216 06:40:15.574211 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.574218 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:15.574223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:15.574289 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:15.603183 1639474 cri.go:89] found id: ""
	I1216 06:40:15.603197 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.603204 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:15.603209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:15.603272 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:15.628682 1639474 cri.go:89] found id: ""
	I1216 06:40:15.628696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:15.628703 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:15.628710 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:15.628720 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:15.716665 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:15.704236   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.709021   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.710701   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.711201   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:15.712773   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:15.716676 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:15.716687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:15.787785 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:15.787806 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:15.815751 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:15.815772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:15.885879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:15.885902 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.402627 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:18.413143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:18.413213 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:18.439934 1639474 cri.go:89] found id: ""
	I1216 06:40:18.439948 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.439956 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:18.439961 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:18.440023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:18.467477 1639474 cri.go:89] found id: ""
	I1216 06:40:18.467491 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.467498 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:18.467503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:18.467564 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:18.492982 1639474 cri.go:89] found id: ""
	I1216 06:40:18.493002 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.493009 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:18.493013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:18.493073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:18.519158 1639474 cri.go:89] found id: ""
	I1216 06:40:18.519173 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.519180 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:18.519185 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:18.519250 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:18.544672 1639474 cri.go:89] found id: ""
	I1216 06:40:18.544687 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.544694 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:18.544699 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:18.544760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:18.574100 1639474 cri.go:89] found id: ""
	I1216 06:40:18.574115 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.574122 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:18.574127 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:18.574190 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:18.600048 1639474 cri.go:89] found id: ""
	I1216 06:40:18.600062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:18.600069 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:18.600077 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:18.600087 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:18.670680 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:18.670700 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:18.686391 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:18.686408 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:18.756196 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:18.747313   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.748097   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.749918   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.750488   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:18.752058   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:18.756206 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:18.756218 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:18.824602 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:18.824623 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.356152 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:21.366658 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:21.366719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:21.391945 1639474 cri.go:89] found id: ""
	I1216 06:40:21.391959 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.391966 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:21.391971 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:21.392032 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:21.419561 1639474 cri.go:89] found id: ""
	I1216 06:40:21.419581 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.419588 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:21.419593 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:21.419662 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:21.446105 1639474 cri.go:89] found id: ""
	I1216 06:40:21.446119 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.446135 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:21.446143 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:21.446212 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:21.472095 1639474 cri.go:89] found id: ""
	I1216 06:40:21.472110 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.472117 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:21.472123 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:21.472188 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:21.502751 1639474 cri.go:89] found id: ""
	I1216 06:40:21.502766 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.502773 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:21.502778 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:21.502841 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:21.528514 1639474 cri.go:89] found id: ""
	I1216 06:40:21.528538 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.528546 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:21.528551 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:21.528623 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:21.554279 1639474 cri.go:89] found id: ""
	I1216 06:40:21.554293 1639474 logs.go:282] 0 containers: []
	W1216 06:40:21.554300 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:21.554308 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:21.554319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:21.622775 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:21.614774   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.615497   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617104   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.617588   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:21.618999   11999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:21.622786 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:21.622795 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:21.692973 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:21.692993 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:21.722066 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:21.722083 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:21.789953 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:21.789974 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:24.305740 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:24.315908 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:24.315976 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:24.344080 1639474 cri.go:89] found id: ""
	I1216 06:40:24.344095 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.344102 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:24.344108 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:24.344169 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:24.370623 1639474 cri.go:89] found id: ""
	I1216 06:40:24.370638 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.370645 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:24.370649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:24.370714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:24.397678 1639474 cri.go:89] found id: ""
	I1216 06:40:24.397701 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.397709 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:24.397714 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:24.397787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:24.427585 1639474 cri.go:89] found id: ""
	I1216 06:40:24.427599 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.427607 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:24.427612 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:24.427685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:24.457451 1639474 cri.go:89] found id: ""
	I1216 06:40:24.457465 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.457472 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:24.457489 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:24.457562 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:24.483717 1639474 cri.go:89] found id: ""
	I1216 06:40:24.483731 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.483738 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:24.483743 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:24.483817 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:24.509734 1639474 cri.go:89] found id: ""
	I1216 06:40:24.509748 1639474 logs.go:282] 0 containers: []
	W1216 06:40:24.509756 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:24.509763 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:24.509774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:24.575490 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:24.575510 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:24.590459 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:24.590476 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:24.660840 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:24.649877   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.651257   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.652433   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.653590   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:24.656273   12107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:24.660854 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:24.660865 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:24.742683 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:24.742706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:27.272978 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:27.283654 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:27.283721 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:27.310045 1639474 cri.go:89] found id: ""
	I1216 06:40:27.310060 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.310067 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:27.310072 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:27.310132 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:27.339319 1639474 cri.go:89] found id: ""
	I1216 06:40:27.339334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.339342 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:27.339347 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:27.339409 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:27.366885 1639474 cri.go:89] found id: ""
	I1216 06:40:27.366901 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.366910 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:27.366915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:27.366980 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:27.392968 1639474 cri.go:89] found id: ""
	I1216 06:40:27.392982 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.392989 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:27.392994 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:27.393072 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:27.425432 1639474 cri.go:89] found id: ""
	I1216 06:40:27.425446 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.425466 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:27.425471 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:27.425538 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:27.454875 1639474 cri.go:89] found id: ""
	I1216 06:40:27.454899 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.454906 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:27.454912 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:27.454982 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:27.480348 1639474 cri.go:89] found id: ""
	I1216 06:40:27.480363 1639474 logs.go:282] 0 containers: []
	W1216 06:40:27.480370 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:27.480378 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:27.480389 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:27.550687 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:27.550715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:27.566692 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:27.566711 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:27.634204 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:27.625961   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.626967   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628010   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.628680   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:27.630265   12215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:27.634214 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:27.634227 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:27.706020 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:27.706040 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:30.238169 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:30.248488 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:30.248550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:30.274527 1639474 cri.go:89] found id: ""
	I1216 06:40:30.274542 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.274549 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:30.274554 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:30.274615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:30.300592 1639474 cri.go:89] found id: ""
	I1216 06:40:30.300610 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.300617 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:30.300624 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:30.300693 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:30.327939 1639474 cri.go:89] found id: ""
	I1216 06:40:30.327966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.327973 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:30.327978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:30.328040 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:30.358884 1639474 cri.go:89] found id: ""
	I1216 06:40:30.358898 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.358905 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:30.358910 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:30.358968 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:30.387991 1639474 cri.go:89] found id: ""
	I1216 06:40:30.388005 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.388012 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:30.388017 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:30.388090 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:30.413034 1639474 cri.go:89] found id: ""
	I1216 06:40:30.413048 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.413055 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:30.413059 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:30.413118 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:30.449975 1639474 cri.go:89] found id: ""
	I1216 06:40:30.450018 1639474 logs.go:282] 0 containers: []
	W1216 06:40:30.450034 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:30.450041 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:30.450053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:30.466503 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:30.466521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:30.528819 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:30.520846   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.521380   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.522897   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.523339   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:30.524879   12319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:30.528828 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:30.528839 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:30.597696 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:30.597715 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:30.625300 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:30.625317 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.194250 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:33.204305 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:33.204368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:33.229739 1639474 cri.go:89] found id: ""
	I1216 06:40:33.229753 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.229760 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:33.229765 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:33.229821 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:33.254131 1639474 cri.go:89] found id: ""
	I1216 06:40:33.254144 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.254151 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:33.254156 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:33.254214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:33.279859 1639474 cri.go:89] found id: ""
	I1216 06:40:33.279881 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.279889 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:33.279894 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:33.279956 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:33.305951 1639474 cri.go:89] found id: ""
	I1216 06:40:33.305966 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.305973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:33.305978 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:33.306037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:33.335767 1639474 cri.go:89] found id: ""
	I1216 06:40:33.335781 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.335789 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:33.335793 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:33.335859 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:33.362761 1639474 cri.go:89] found id: ""
	I1216 06:40:33.362774 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.362781 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:33.362786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:33.362843 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:33.389319 1639474 cri.go:89] found id: ""
	I1216 06:40:33.389334 1639474 logs.go:282] 0 containers: []
	W1216 06:40:33.389340 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:33.389348 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:33.389359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:33.453913 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:33.444788   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.445454   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447138   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.447727   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:33.449700   12421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:33.453925 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:33.453936 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:33.522875 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:33.522895 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:33.556966 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:33.556981 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:33.624329 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:33.624350 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:36.139596 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:36.150559 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:36.150621 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:36.176931 1639474 cri.go:89] found id: ""
	I1216 06:40:36.176946 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.176954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:36.176959 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:36.177023 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:36.203410 1639474 cri.go:89] found id: ""
	I1216 06:40:36.203424 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.203430 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:36.203435 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:36.203498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:36.232378 1639474 cri.go:89] found id: ""
	I1216 06:40:36.232393 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.232399 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:36.232407 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:36.232504 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:36.258614 1639474 cri.go:89] found id: ""
	I1216 06:40:36.258636 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.258644 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:36.258649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:36.258711 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:36.287134 1639474 cri.go:89] found id: ""
	I1216 06:40:36.287149 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.287156 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:36.287161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:36.287225 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:36.316901 1639474 cri.go:89] found id: ""
	I1216 06:40:36.316915 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.316922 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:36.316927 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:36.316991 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:36.343964 1639474 cri.go:89] found id: ""
	I1216 06:40:36.343979 1639474 logs.go:282] 0 containers: []
	W1216 06:40:36.343988 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:36.343997 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:36.344009 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:36.409151 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:36.400502   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.401298   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.402984   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.403504   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:36.405133   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:36.409161 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:36.409172 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:36.477694 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:36.477717 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:36.507334 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:36.507355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:36.577747 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:36.577766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.094282 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:39.105025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:39.105089 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:39.131493 1639474 cri.go:89] found id: ""
	I1216 06:40:39.131507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.131514 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:39.131525 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:39.131586 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:39.163796 1639474 cri.go:89] found id: ""
	I1216 06:40:39.163811 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.163819 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:39.163823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:39.163886 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:39.191137 1639474 cri.go:89] found id: ""
	I1216 06:40:39.191152 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.191160 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:39.191165 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:39.191226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:39.217834 1639474 cri.go:89] found id: ""
	I1216 06:40:39.217850 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.217857 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:39.217862 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:39.217926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:39.244937 1639474 cri.go:89] found id: ""
	I1216 06:40:39.244951 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.244958 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:39.244963 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:39.245026 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:39.274684 1639474 cri.go:89] found id: ""
	I1216 06:40:39.274698 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.274706 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:39.274711 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:39.274774 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:39.302124 1639474 cri.go:89] found id: ""
	I1216 06:40:39.302138 1639474 logs.go:282] 0 containers: []
	W1216 06:40:39.302145 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:39.302153 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:39.302163 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:39.370146 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:39.370166 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:39.397930 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:39.397946 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:39.469905 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:39.469925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:39.487153 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:39.487169 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:39.556831 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:39.547994   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.548793   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.550966   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.551537   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:39.552926   12655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.057113 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:42.068649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:42.068719 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:42.098202 1639474 cri.go:89] found id: ""
	I1216 06:40:42.098217 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.098224 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:42.098229 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:42.098294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:42.130680 1639474 cri.go:89] found id: ""
	I1216 06:40:42.130696 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.130703 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:42.130708 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:42.130779 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:42.167131 1639474 cri.go:89] found id: ""
	I1216 06:40:42.167146 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.167153 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:42.167160 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:42.167230 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:42.197324 1639474 cri.go:89] found id: ""
	I1216 06:40:42.197339 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.197346 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:42.197352 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:42.197420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:42.225831 1639474 cri.go:89] found id: ""
	I1216 06:40:42.225848 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.225856 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:42.225861 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:42.225930 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:42.257762 1639474 cri.go:89] found id: ""
	I1216 06:40:42.257777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.257786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:42.257792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:42.257852 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:42.284492 1639474 cri.go:89] found id: ""
	I1216 06:40:42.284507 1639474 logs.go:282] 0 containers: []
	W1216 06:40:42.284515 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:42.284523 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:42.284535 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:42.351298 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:42.351319 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:42.367176 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:42.367193 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:42.433375 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:42.424339   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.425590   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.426458   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.427469   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:42.429024   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:42.433386 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:42.433396 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:42.500708 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:42.500729 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.031368 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:45.055503 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:45.055570 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:45.098074 1639474 cri.go:89] found id: ""
	I1216 06:40:45.098091 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.098100 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:45.098105 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:45.098174 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:45.144642 1639474 cri.go:89] found id: ""
	I1216 06:40:45.144658 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.144666 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:45.144671 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:45.144743 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:45.177748 1639474 cri.go:89] found id: ""
	I1216 06:40:45.177777 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.177786 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:45.177792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:45.177875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:45.237332 1639474 cri.go:89] found id: ""
	I1216 06:40:45.237350 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.237368 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:45.237373 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:45.237462 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:45.277580 1639474 cri.go:89] found id: ""
	I1216 06:40:45.277608 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.277625 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:45.277631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:45.277787 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:45.319169 1639474 cri.go:89] found id: ""
	I1216 06:40:45.319184 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.319192 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:45.319198 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:45.319268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:45.355649 1639474 cri.go:89] found id: ""
	I1216 06:40:45.355663 1639474 logs.go:282] 0 containers: []
	W1216 06:40:45.355672 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:45.355691 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:45.355723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:45.423762 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:45.423783 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:45.451985 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:45.452002 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:45.516593 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:45.516613 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:45.531478 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:45.531500 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:45.596800 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:45.588341   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.588774   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590507   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.590979   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:45.592493   12868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:48.098483 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:48.108786 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:48.108849 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:48.134211 1639474 cri.go:89] found id: ""
	I1216 06:40:48.134225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.134232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:48.134237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:48.134297 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:48.160517 1639474 cri.go:89] found id: ""
	I1216 06:40:48.160531 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.160538 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:48.160544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:48.160604 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:48.185669 1639474 cri.go:89] found id: ""
	I1216 06:40:48.185682 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.185690 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:48.185694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:48.185754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:48.210265 1639474 cri.go:89] found id: ""
	I1216 06:40:48.210279 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.210286 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:48.210291 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:48.210403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:48.234252 1639474 cri.go:89] found id: ""
	I1216 06:40:48.234267 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.234274 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:48.234279 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:48.234339 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:48.259358 1639474 cri.go:89] found id: ""
	I1216 06:40:48.259372 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.259379 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:48.259384 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:48.259443 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:48.288697 1639474 cri.go:89] found id: ""
	I1216 06:40:48.288713 1639474 logs.go:282] 0 containers: []
	W1216 06:40:48.288720 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:48.288728 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:48.288738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:48.357686 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:48.357712 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:48.372954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:48.372973 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:48.434679 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:48.426723   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.427402   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.428895   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.429341   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:48.430781   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:48.434689 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:48.434701 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:48.505103 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:48.505127 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
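The block above is one pass of minikube's apiserver wait loop: pgrep finds no kube-apiserver process, crictl reports no control-plane containers, so the tool gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output, then retries a few seconds later. The same evidence can be reproduced by hand on the node. A minimal sketch, using only the commands and paths that already appear in the log (e.g. /var/lib/minikube/kubeconfig and the v1.35.0-beta.0 kubectl binary path are taken from the lines above), would be:

    # does an apiserver process exist at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # any control-plane containers known to CRI-O?
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a

    # recent kubelet and CRI-O activity
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

    # what kubectl sees through the on-node kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig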
	I1216 06:40:51.033411 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:51.043540 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:51.043600 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:51.070010 1639474 cri.go:89] found id: ""
	I1216 06:40:51.070025 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.070032 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:51.070037 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:51.070100 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:51.096267 1639474 cri.go:89] found id: ""
	I1216 06:40:51.096282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.096290 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:51.096295 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:51.096356 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:51.122692 1639474 cri.go:89] found id: ""
	I1216 06:40:51.122707 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.122714 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:51.122719 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:51.122784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:51.152647 1639474 cri.go:89] found id: ""
	I1216 06:40:51.152662 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.152670 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:51.152680 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:51.152744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:51.180574 1639474 cri.go:89] found id: ""
	I1216 06:40:51.180589 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.180597 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:51.180602 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:51.180668 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:51.206605 1639474 cri.go:89] found id: ""
	I1216 06:40:51.206619 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.206626 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:51.206631 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:51.206695 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:51.231786 1639474 cri.go:89] found id: ""
	I1216 06:40:51.231809 1639474 logs.go:282] 0 containers: []
	W1216 06:40:51.231817 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:51.231825 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:51.231835 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:51.297100 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:51.297120 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:51.311954 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:51.311972 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:51.379683 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:51.371735   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.372335   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.373907   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.374265   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:51.375750   13071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:51.379694 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:51.379706 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:51.447537 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:51.447557 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:53.983520 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:53.993929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:53.993987 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:54.023619 1639474 cri.go:89] found id: ""
	I1216 06:40:54.023634 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.023640 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:54.023645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:54.023708 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:54.049842 1639474 cri.go:89] found id: ""
	I1216 06:40:54.049857 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.049864 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:54.049869 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:54.049934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:54.077181 1639474 cri.go:89] found id: ""
	I1216 06:40:54.077205 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.077212 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:54.077217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:54.077280 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:54.105267 1639474 cri.go:89] found id: ""
	I1216 06:40:54.105282 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.105291 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:54.105297 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:54.105363 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:54.130851 1639474 cri.go:89] found id: ""
	I1216 06:40:54.130874 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.130881 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:54.130886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:54.130949 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:54.156895 1639474 cri.go:89] found id: ""
	I1216 06:40:54.156910 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.156917 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:54.156923 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:54.156983 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:54.183545 1639474 cri.go:89] found id: ""
	I1216 06:40:54.183560 1639474 logs.go:282] 0 containers: []
	W1216 06:40:54.183566 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:54.183574 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:54.183584 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:54.249489 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:54.249509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:54.263930 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:54.263947 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:54.329743 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:54.321698   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.322538   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324144   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.324622   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:54.326115   13175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:40:54.329755 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:54.329766 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:54.396582 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:54.396603 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:56.928591 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:56.939856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:56.939917 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:56.967210 1639474 cri.go:89] found id: ""
	I1216 06:40:56.967225 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.967232 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:56.967237 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:56.967298 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:56.993815 1639474 cri.go:89] found id: ""
	I1216 06:40:56.993829 1639474 logs.go:282] 0 containers: []
	W1216 06:40:56.993836 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:56.993841 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:56.993898 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:57.029670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.029684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.029691 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:57.029696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:57.029754 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:57.054833 1639474 cri.go:89] found id: ""
	I1216 06:40:57.054847 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.054854 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:57.054859 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:57.054924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:40:57.079670 1639474 cri.go:89] found id: ""
	I1216 06:40:57.079684 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.079691 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:40:57.079696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:40:57.079761 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:40:57.104048 1639474 cri.go:89] found id: ""
	I1216 06:40:57.104062 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.104069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:40:57.104074 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:40:57.104142 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:40:57.129442 1639474 cri.go:89] found id: ""
	I1216 06:40:57.129462 1639474 logs.go:282] 0 containers: []
	W1216 06:40:57.129469 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:40:57.129477 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:40:57.129487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:40:57.197165 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:40:57.197185 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:40:57.226479 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:40:57.226498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:40:57.292031 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:40:57.292053 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:40:57.306889 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:40:57.306905 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:40:57.372214 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:40:57.363236   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.363924   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.365612   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.366208   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:40:57.367883   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
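Each "describe nodes" attempt fails identically: kubectl dials the apiserver endpoint from /var/lib/minikube/kubeconfig (localhost:8441) and gets connection refused, which matches the empty crictl listings above — nothing is listening because no kube-apiserver container ever came up. A hedged way to confirm that symptom on the node, with generic commands that are not part of the minikube tooling shown here, would be:

    # confirm nothing is bound to the apiserver port
    sudo ss -ltnp | grep 8441

    # probe the endpoint directly; expect "connection refused" while the apiserver is down
    curl -k https://localhost:8441/healthz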
	I1216 06:40:59.872521 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:40:59.882455 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:40:59.882521 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:40:59.913998 1639474 cri.go:89] found id: ""
	I1216 06:40:59.914012 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.914020 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:40:59.914025 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:40:59.914091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:40:59.942569 1639474 cri.go:89] found id: ""
	I1216 06:40:59.942583 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.942589 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:40:59.942594 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:40:59.942665 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:40:59.970700 1639474 cri.go:89] found id: ""
	I1216 06:40:59.970729 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.970736 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:40:59.970742 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:40:59.970809 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:40:59.997067 1639474 cri.go:89] found id: ""
	I1216 06:40:59.997085 1639474 logs.go:282] 0 containers: []
	W1216 06:40:59.997092 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:40:59.997098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:40:59.997163 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:00.191988 1639474 cri.go:89] found id: ""
	I1216 06:41:00.192005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.192013 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:00.192018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:00.192086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:00.277353 1639474 cri.go:89] found id: ""
	I1216 06:41:00.277369 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.277377 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:00.277382 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:00.277497 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:00.317655 1639474 cri.go:89] found id: ""
	I1216 06:41:00.317680 1639474 logs.go:282] 0 containers: []
	W1216 06:41:00.317688 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:00.317697 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:00.317710 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:00.373222 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:00.373244 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:00.450289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:00.450312 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:00.467305 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:00.467321 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:00.537520 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:00.528959   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.529630   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531328   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.531879   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:00.533548   13394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:00.537529 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:00.537544 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.105837 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:03.116211 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:03.116271 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:03.140992 1639474 cri.go:89] found id: ""
	I1216 06:41:03.141005 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.141013 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:03.141018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:03.141077 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:03.169832 1639474 cri.go:89] found id: ""
	I1216 06:41:03.169846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.169853 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:03.169858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:03.169923 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:03.200294 1639474 cri.go:89] found id: ""
	I1216 06:41:03.200308 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.200316 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:03.200321 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:03.200422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:03.226615 1639474 cri.go:89] found id: ""
	I1216 06:41:03.226629 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.226635 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:03.226641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:03.226702 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:03.252099 1639474 cri.go:89] found id: ""
	I1216 06:41:03.252113 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.252120 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:03.252125 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:03.252186 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:03.277049 1639474 cri.go:89] found id: ""
	I1216 06:41:03.277064 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.277070 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:03.277075 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:03.277136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:03.302834 1639474 cri.go:89] found id: ""
	I1216 06:41:03.302850 1639474 logs.go:282] 0 containers: []
	W1216 06:41:03.302857 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:03.302865 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:03.302877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:03.369696 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:03.369719 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:03.384336 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:03.384358 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:03.450962 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:03.442704   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.443315   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445009   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.445434   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:03.446924   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:03.450973 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:03.450985 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:03.522274 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:03.522297 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:06.053196 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:06.063351 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:06.063422 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:06.089075 1639474 cri.go:89] found id: ""
	I1216 06:41:06.089089 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.089096 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:06.089102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:06.089162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:06.118245 1639474 cri.go:89] found id: ""
	I1216 06:41:06.118259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.118266 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:06.118271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:06.118336 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:06.143697 1639474 cri.go:89] found id: ""
	I1216 06:41:06.143724 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.143732 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:06.143737 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:06.143805 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:06.169572 1639474 cri.go:89] found id: ""
	I1216 06:41:06.169586 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.169594 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:06.169599 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:06.169661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:06.195851 1639474 cri.go:89] found id: ""
	I1216 06:41:06.195867 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.195874 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:06.195879 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:06.195942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:06.223692 1639474 cri.go:89] found id: ""
	I1216 06:41:06.223707 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.223715 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:06.223720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:06.223780 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:06.249649 1639474 cri.go:89] found id: ""
	I1216 06:41:06.249679 1639474 logs.go:282] 0 containers: []
	W1216 06:41:06.249686 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:06.249694 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:06.249705 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:06.314738 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:06.314759 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:06.329678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:06.329695 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:06.395023 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:06.386200   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.387084   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.388942   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.389302   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:06.390896   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:06.395034 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:06.395046 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:06.463667 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:06.463687 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:08.992603 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:09.003856 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:09.003937 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:09.031578 1639474 cri.go:89] found id: ""
	I1216 06:41:09.031592 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.031599 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:09.031604 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:09.031663 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:09.056946 1639474 cri.go:89] found id: ""
	I1216 06:41:09.056961 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.056969 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:09.056974 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:09.057035 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:09.082038 1639474 cri.go:89] found id: ""
	I1216 06:41:09.082053 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.082060 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:09.082065 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:09.082125 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:09.107847 1639474 cri.go:89] found id: ""
	I1216 06:41:09.107862 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.107869 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:09.107874 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:09.107933 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:09.133995 1639474 cri.go:89] found id: ""
	I1216 06:41:09.134010 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.134017 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:09.134022 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:09.134086 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:09.159110 1639474 cri.go:89] found id: ""
	I1216 06:41:09.159125 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.159132 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:09.159137 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:09.159197 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:09.189150 1639474 cri.go:89] found id: ""
	I1216 06:41:09.189164 1639474 logs.go:282] 0 containers: []
	W1216 06:41:09.189171 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:09.189179 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:09.189190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:09.251080 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:09.242202   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.242596   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244208   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.244880   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:09.246572   13689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:09.251090 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:09.251102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:09.318859 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:09.318879 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:09.349358 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:09.349381 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:09.418362 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:09.418385 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:11.933431 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:11.944248 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:11.944309 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:11.976909 1639474 cri.go:89] found id: ""
	I1216 06:41:11.976924 1639474 logs.go:282] 0 containers: []
	W1216 06:41:11.976932 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:11.976937 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:11.976998 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:12.011035 1639474 cri.go:89] found id: ""
	I1216 06:41:12.011050 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.011057 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:12.011062 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:12.011126 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:12.041473 1639474 cri.go:89] found id: ""
	I1216 06:41:12.041495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.041502 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:12.041508 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:12.041571 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:12.066438 1639474 cri.go:89] found id: ""
	I1216 06:41:12.066463 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.066471 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:12.066477 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:12.066542 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:12.090884 1639474 cri.go:89] found id: ""
	I1216 06:41:12.090899 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.090906 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:12.090911 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:12.090970 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:12.116491 1639474 cri.go:89] found id: ""
	I1216 06:41:12.116506 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.116516 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:12.116522 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:12.116580 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:12.142941 1639474 cri.go:89] found id: ""
	I1216 06:41:12.142956 1639474 logs.go:282] 0 containers: []
	W1216 06:41:12.142963 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:12.142971 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:12.142982 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:12.172125 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:12.172142 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:12.240713 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:12.240734 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:12.255672 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:12.255689 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:12.321167 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:12.312200   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.313096   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315001   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.315663   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:12.317261   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:12.321177 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:12.321190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:14.894286 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:14.904324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:14.904383 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:14.938397 1639474 cri.go:89] found id: ""
	I1216 06:41:14.938421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.938429 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:14.938434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:14.938501 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:14.967116 1639474 cri.go:89] found id: ""
	I1216 06:41:14.967130 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.967137 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:14.967141 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:14.967203 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:14.993300 1639474 cri.go:89] found id: ""
	I1216 06:41:14.993324 1639474 logs.go:282] 0 containers: []
	W1216 06:41:14.993331 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:14.993336 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:14.993414 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:15.065324 1639474 cri.go:89] found id: ""
	I1216 06:41:15.065347 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.065374 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:15.065379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:15.065453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:15.094230 1639474 cri.go:89] found id: ""
	I1216 06:41:15.094254 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.094262 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:15.094268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:15.094334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:15.125543 1639474 cri.go:89] found id: ""
	I1216 06:41:15.125557 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.125567 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:15.125574 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:15.125641 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:15.153256 1639474 cri.go:89] found id: ""
	I1216 06:41:15.153271 1639474 logs.go:282] 0 containers: []
	W1216 06:41:15.153280 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:15.153287 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:15.153298 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:15.220613 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:15.220633 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:15.235620 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:15.235637 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:15.298217 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:15.289454   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.290253   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.291923   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.292609   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:15.294226   13906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:15.298227 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:15.298238 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:15.366620 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:15.366643 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:17.896595 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:17.908386 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:17.908446 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:17.937743 1639474 cri.go:89] found id: ""
	I1216 06:41:17.937757 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.937763 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:17.937768 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:17.937827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:17.970369 1639474 cri.go:89] found id: ""
	I1216 06:41:17.970383 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.970390 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:17.970395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:17.970453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:17.996832 1639474 cri.go:89] found id: ""
	I1216 06:41:17.996846 1639474 logs.go:282] 0 containers: []
	W1216 06:41:17.996853 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:17.996858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:17.996924 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:18.038145 1639474 cri.go:89] found id: ""
	I1216 06:41:18.038159 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.038167 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:18.038172 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:18.038235 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:18.064225 1639474 cri.go:89] found id: ""
	I1216 06:41:18.064239 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.064248 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:18.064254 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:18.064314 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:18.094775 1639474 cri.go:89] found id: ""
	I1216 06:41:18.094789 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.094797 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:18.094802 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:18.094863 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:18.120874 1639474 cri.go:89] found id: ""
	I1216 06:41:18.120888 1639474 logs.go:282] 0 containers: []
	W1216 06:41:18.120895 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:18.120903 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:18.120913 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:18.188407 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:18.188429 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:18.221279 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:18.221295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:18.288107 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:18.288129 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:18.303324 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:18.303342 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:18.371049 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:18.362924   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.363610   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365170   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.365582   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:18.367111   14025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:20.871320 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:20.881458 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:20.881519 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:20.910690 1639474 cri.go:89] found id: ""
	I1216 06:41:20.910704 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.910711 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:20.910716 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:20.910778 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:20.940115 1639474 cri.go:89] found id: ""
	I1216 06:41:20.940131 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.940138 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:20.940144 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:20.940205 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:20.971890 1639474 cri.go:89] found id: ""
	I1216 06:41:20.971904 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.971911 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:20.971916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:20.971973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:20.997611 1639474 cri.go:89] found id: ""
	I1216 06:41:20.997627 1639474 logs.go:282] 0 containers: []
	W1216 06:41:20.997634 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:20.997639 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:20.997714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:21.028905 1639474 cri.go:89] found id: ""
	I1216 06:41:21.028919 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.028926 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:21.028931 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:21.028990 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:21.055176 1639474 cri.go:89] found id: ""
	I1216 06:41:21.055190 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.055197 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:21.055202 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:21.055262 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:21.081697 1639474 cri.go:89] found id: ""
	I1216 06:41:21.081712 1639474 logs.go:282] 0 containers: []
	W1216 06:41:21.081719 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:21.081727 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:21.081738 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:21.148234 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:21.148255 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:21.164172 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:21.164192 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:21.228352 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:21.219814   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.220709   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222449   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.222766   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:21.224337   14118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:21.228362 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:21.228374 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:21.295358 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:21.295378 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:23.826021 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:23.836732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:23.836794 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:23.865987 1639474 cri.go:89] found id: ""
	I1216 06:41:23.866001 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.866008 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:23.866013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:23.866073 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:23.891393 1639474 cri.go:89] found id: ""
	I1216 06:41:23.891408 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.891415 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:23.891420 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:23.891486 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:23.918388 1639474 cri.go:89] found id: ""
	I1216 06:41:23.918403 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.918410 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:23.918415 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:23.918475 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:23.961374 1639474 cri.go:89] found id: ""
	I1216 06:41:23.961390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.961397 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:23.961402 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:23.961461 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:23.987162 1639474 cri.go:89] found id: ""
	I1216 06:41:23.987176 1639474 logs.go:282] 0 containers: []
	W1216 06:41:23.987184 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:23.987195 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:23.987257 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:24.016111 1639474 cri.go:89] found id: ""
	I1216 06:41:24.016127 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.016134 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:24.016139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:24.016202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:24.043481 1639474 cri.go:89] found id: ""
	I1216 06:41:24.043495 1639474 logs.go:282] 0 containers: []
	W1216 06:41:24.043503 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:24.043511 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:24.043521 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:24.111316 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:24.102100   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.103028   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105013   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.105610   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:24.107298   14216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:24.111326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:24.111338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:24.178630 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:24.178650 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:24.213388 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:24.213405 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:24.283269 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:24.283290 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:26.798616 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:26.808720 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:26.808786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:26.834419 1639474 cri.go:89] found id: ""
	I1216 06:41:26.834433 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.834451 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:26.834457 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:26.834530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:26.860230 1639474 cri.go:89] found id: ""
	I1216 06:41:26.860244 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.860251 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:26.860256 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:26.860316 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:26.886841 1639474 cri.go:89] found id: ""
	I1216 06:41:26.886856 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.886863 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:26.886868 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:26.886934 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:26.933097 1639474 cri.go:89] found id: ""
	I1216 06:41:26.933121 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.933129 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:26.933134 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:26.933201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:26.967219 1639474 cri.go:89] found id: ""
	I1216 06:41:26.967233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:26.967241 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:26.967258 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:26.967319 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:27.008045 1639474 cri.go:89] found id: ""
	I1216 06:41:27.008074 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.008082 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:27.008088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:27.008156 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:27.034453 1639474 cri.go:89] found id: ""
	I1216 06:41:27.034469 1639474 logs.go:282] 0 containers: []
	W1216 06:41:27.034476 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:27.034484 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:27.034507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:27.104223 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:27.104245 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:27.119468 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:27.119487 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:27.188973 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:27.180080   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.181032   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.182948   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.183274   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:27.184949   14327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:27.188983 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:27.188994 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:27.258008 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:27.258028 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:29.786955 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:29.797122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:29.797184 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:29.824207 1639474 cri.go:89] found id: ""
	I1216 06:41:29.824221 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.824228 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:29.824233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:29.824290 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:29.850615 1639474 cri.go:89] found id: ""
	I1216 06:41:29.850630 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.850636 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:29.850641 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:29.850703 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:29.876387 1639474 cri.go:89] found id: ""
	I1216 06:41:29.876401 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.876408 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:29.876413 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:29.876498 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:29.907653 1639474 cri.go:89] found id: ""
	I1216 06:41:29.907667 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.907674 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:29.907678 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:29.907735 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:29.944219 1639474 cri.go:89] found id: ""
	I1216 06:41:29.944233 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.944239 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:29.944244 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:29.944302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:29.976007 1639474 cri.go:89] found id: ""
	I1216 06:41:29.976021 1639474 logs.go:282] 0 containers: []
	W1216 06:41:29.976029 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:29.976033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:29.976095 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:30.024272 1639474 cri.go:89] found id: ""
	I1216 06:41:30.024289 1639474 logs.go:282] 0 containers: []
	W1216 06:41:30.024297 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:30.024306 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:30.024322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:30.119806 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:30.119827 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:30.136379 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:30.136400 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:30.205690 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:30.196345   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.197016   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.198788   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.199535   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:30.201508   14433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:30.205700 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:30.205723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:30.274216 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:30.274240 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:32.809139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:32.819371 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:32.819431 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:32.847039 1639474 cri.go:89] found id: ""
	I1216 06:41:32.847054 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.847065 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:32.847070 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:32.847138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:32.875215 1639474 cri.go:89] found id: ""
	I1216 06:41:32.875229 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.875236 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:32.875240 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:32.875300 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:32.907300 1639474 cri.go:89] found id: ""
	I1216 06:41:32.907314 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.907321 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:32.907326 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:32.907381 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:32.938860 1639474 cri.go:89] found id: ""
	I1216 06:41:32.938874 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.938881 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:32.938886 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:32.938942 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:32.971352 1639474 cri.go:89] found id: ""
	I1216 06:41:32.971366 1639474 logs.go:282] 0 containers: []
	W1216 06:41:32.971374 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:32.971379 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:32.971436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:33.012516 1639474 cri.go:89] found id: ""
	I1216 06:41:33.012531 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.012538 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:33.012543 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:33.012622 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:33.041830 1639474 cri.go:89] found id: ""
	I1216 06:41:33.041844 1639474 logs.go:282] 0 containers: []
	W1216 06:41:33.041851 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:33.041859 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:33.041869 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:33.107636 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:33.107656 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:33.122787 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:33.122803 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:33.191649 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:33.182880   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.183594   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185187   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.185934   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:33.187632   14537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:33.191659 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:33.191682 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:33.263447 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:33.263474 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:35.794998 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:35.805176 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:35.805236 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:35.831135 1639474 cri.go:89] found id: ""
	I1216 06:41:35.831149 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.831156 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:35.831161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:35.831223 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:35.860254 1639474 cri.go:89] found id: ""
	I1216 06:41:35.860281 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.860289 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:35.860294 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:35.860360 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:35.887306 1639474 cri.go:89] found id: ""
	I1216 06:41:35.887320 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.887327 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:35.887333 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:35.887391 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:35.917653 1639474 cri.go:89] found id: ""
	I1216 06:41:35.917668 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.917690 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:35.917696 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:35.917763 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:35.959523 1639474 cri.go:89] found id: ""
	I1216 06:41:35.959546 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.959553 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:35.959558 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:35.959629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:35.989044 1639474 cri.go:89] found id: ""
	I1216 06:41:35.989062 1639474 logs.go:282] 0 containers: []
	W1216 06:41:35.989069 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:35.989077 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:35.989138 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:36.024859 1639474 cri.go:89] found id: ""
	I1216 06:41:36.024875 1639474 logs.go:282] 0 containers: []
	W1216 06:41:36.024885 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:36.024895 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:36.024912 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:36.056878 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:36.056896 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:36.121811 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:36.121834 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:36.137437 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:36.137455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:36.205908 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:36.196720   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.197549   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199375   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.199759   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:36.201454   14657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:36.205920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:36.205931 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:38.776930 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:38.786842 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:38.786902 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:38.812622 1639474 cri.go:89] found id: ""
	I1216 06:41:38.812637 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.812644 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:38.812649 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:38.812705 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:38.838434 1639474 cri.go:89] found id: ""
	I1216 06:41:38.838448 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.838456 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:38.838461 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:38.838523 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:38.863392 1639474 cri.go:89] found id: ""
	I1216 06:41:38.863407 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.863414 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:38.863419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:38.863479 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:38.888908 1639474 cri.go:89] found id: ""
	I1216 06:41:38.888922 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.888929 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:38.888934 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:38.888993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:38.917217 1639474 cri.go:89] found id: ""
	I1216 06:41:38.917247 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.917255 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:38.917260 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:38.917340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:38.951610 1639474 cri.go:89] found id: ""
	I1216 06:41:38.951623 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.951630 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:38.951645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:38.951706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:38.982144 1639474 cri.go:89] found id: ""
	I1216 06:41:38.982158 1639474 logs.go:282] 0 containers: []
	W1216 06:41:38.982165 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:38.982173 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:38.982184 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:39.051829 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:39.043703   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.044349   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.045933   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.046368   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:39.047868   14748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:39.051839 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:39.051860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:39.125701 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:39.125723 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:39.157087 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:39.157104 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:39.225477 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:39.225498 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:41.740919 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:41.751149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:41.751211 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:41.776245 1639474 cri.go:89] found id: ""
	I1216 06:41:41.776259 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.776266 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:41.776271 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:41.776330 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:41.801530 1639474 cri.go:89] found id: ""
	I1216 06:41:41.801543 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.801556 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:41.801561 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:41.801619 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:41.826287 1639474 cri.go:89] found id: ""
	I1216 06:41:41.826300 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.826307 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:41.826312 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:41.826368 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:41.855404 1639474 cri.go:89] found id: ""
	I1216 06:41:41.855419 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.855426 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:41.855431 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:41.855490 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:41.883079 1639474 cri.go:89] found id: ""
	I1216 06:41:41.883093 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.883100 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:41.883104 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:41.883162 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:41.924362 1639474 cri.go:89] found id: ""
	I1216 06:41:41.924376 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.924393 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:41.924399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:41.924503 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:41.958054 1639474 cri.go:89] found id: ""
	I1216 06:41:41.958069 1639474 logs.go:282] 0 containers: []
	W1216 06:41:41.958076 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:41.958083 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:41.958093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:42.031093 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:42.022513   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.023465   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.024526   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.025029   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:42.026770   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:42.031104 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:42.031117 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:42.098938 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:42.098961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:42.132662 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:42.132681 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:42.206635 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:42.206658 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:44.725533 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:44.735690 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:44.735751 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:44.764539 1639474 cri.go:89] found id: ""
	I1216 06:41:44.764554 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.764561 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:44.764566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:44.764624 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:44.789462 1639474 cri.go:89] found id: ""
	I1216 06:41:44.789476 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.789483 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:44.789487 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:44.789550 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:44.813863 1639474 cri.go:89] found id: ""
	I1216 06:41:44.813877 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.813884 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:44.813889 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:44.813948 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:44.842990 1639474 cri.go:89] found id: ""
	I1216 06:41:44.843006 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.843013 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:44.843018 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:44.843076 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:44.868986 1639474 cri.go:89] found id: ""
	I1216 06:41:44.869000 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.869006 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:44.869013 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:44.869070 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:44.897735 1639474 cri.go:89] found id: ""
	I1216 06:41:44.897759 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.897767 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:44.897773 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:44.897840 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:44.927690 1639474 cri.go:89] found id: ""
	I1216 06:41:44.927715 1639474 logs.go:282] 0 containers: []
	W1216 06:41:44.927722 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:44.927730 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:44.927740 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:45.002166 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:45.002190 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:45.029027 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:45.029047 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:45.167411 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:45.147237   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.148177   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.151868   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.153460   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:45.154056   14960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:45.167428 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:45.167448 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:45.247049 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:45.247076 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:47.787199 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:47.797629 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:47.797694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:47.822803 1639474 cri.go:89] found id: ""
	I1216 06:41:47.822818 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.822825 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:47.822830 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:47.822894 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:47.848082 1639474 cri.go:89] found id: ""
	I1216 06:41:47.848109 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.848117 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:47.848122 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:47.848199 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:47.874407 1639474 cri.go:89] found id: ""
	I1216 06:41:47.874421 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.874428 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:47.874434 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:47.874495 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:47.908568 1639474 cri.go:89] found id: ""
	I1216 06:41:47.908604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.908611 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:47.908617 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:47.908685 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:47.942423 1639474 cri.go:89] found id: ""
	I1216 06:41:47.942438 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.942445 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:47.942450 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:47.942518 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:47.977188 1639474 cri.go:89] found id: ""
	I1216 06:41:47.977210 1639474 logs.go:282] 0 containers: []
	W1216 06:41:47.977218 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:47.977223 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:47.977302 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:48.011589 1639474 cri.go:89] found id: ""
	I1216 06:41:48.011604 1639474 logs.go:282] 0 containers: []
	W1216 06:41:48.011623 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:48.011637 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:48.011649 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:48.090336 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:48.090357 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:48.106676 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:48.106693 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:48.174952 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:48.165517   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.166169   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.168421   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.169339   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:48.170443   15065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:48.174963 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:48.174975 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:48.244365 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:48.244386 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:50.777766 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:50.790374 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:50.790436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:50.817848 1639474 cri.go:89] found id: ""
	I1216 06:41:50.817863 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.817870 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:50.817875 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:50.817947 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:50.848261 1639474 cri.go:89] found id: ""
	I1216 06:41:50.848277 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.848285 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:50.848290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:50.848357 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:50.875745 1639474 cri.go:89] found id: ""
	I1216 06:41:50.875771 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.875779 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:50.875784 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:50.875857 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:50.908128 1639474 cri.go:89] found id: ""
	I1216 06:41:50.908142 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.908149 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:50.908154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:50.908216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:50.945866 1639474 cri.go:89] found id: ""
	I1216 06:41:50.945880 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.945897 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:50.945906 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:50.945988 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:50.976758 1639474 cri.go:89] found id: ""
	I1216 06:41:50.976772 1639474 logs.go:282] 0 containers: []
	W1216 06:41:50.976779 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:50.976790 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:50.976862 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:51.012047 1639474 cri.go:89] found id: ""
	I1216 06:41:51.012061 1639474 logs.go:282] 0 containers: []
	W1216 06:41:51.012080 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:51.012088 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:51.012099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:51.079840 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:51.079863 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:51.095967 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:51.095984 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:51.168911 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:51.158269   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.159160   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161023   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.161808   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:51.163880   15168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:51.168920 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:51.168932 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:51.241258 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:51.241281 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:53.774859 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:53.785580 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:53.785647 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:53.815910 1639474 cri.go:89] found id: ""
	I1216 06:41:53.815946 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.815954 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:53.815960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:53.816034 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:53.843197 1639474 cri.go:89] found id: ""
	I1216 06:41:53.843220 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.843228 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:53.843233 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:53.843303 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:53.869584 1639474 cri.go:89] found id: ""
	I1216 06:41:53.869598 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.869605 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:53.869610 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:53.869672 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:53.898126 1639474 cri.go:89] found id: ""
	I1216 06:41:53.898141 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.898148 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:53.898154 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:53.898217 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:53.935008 1639474 cri.go:89] found id: ""
	I1216 06:41:53.935022 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.935029 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:53.935033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:53.935094 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:53.971715 1639474 cri.go:89] found id: ""
	I1216 06:41:53.971729 1639474 logs.go:282] 0 containers: []
	W1216 06:41:53.971740 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:53.971745 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:53.971827 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:54.004089 1639474 cri.go:89] found id: ""
	I1216 06:41:54.004107 1639474 logs.go:282] 0 containers: []
	W1216 06:41:54.004115 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:54.004138 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:54.004151 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:54.072434 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:54.072455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:54.088417 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:54.088436 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:54.154720 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:54.146355   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.146923   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.148518   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.149322   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:54.150888   15274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:54.154730 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:54.154741 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:54.223744 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:54.223763 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:56.753558 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:56.764118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:56.764182 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:56.789865 1639474 cri.go:89] found id: ""
	I1216 06:41:56.789879 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.789886 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:56.789891 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:56.789954 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:56.815375 1639474 cri.go:89] found id: ""
	I1216 06:41:56.815390 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.815396 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:56.815401 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:56.815458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:56.843367 1639474 cri.go:89] found id: ""
	I1216 06:41:56.843381 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.843389 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:56.843394 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:56.843453 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:56.869235 1639474 cri.go:89] found id: ""
	I1216 06:41:56.869249 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.869263 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:56.869268 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:56.869325 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:56.894296 1639474 cri.go:89] found id: ""
	I1216 06:41:56.894310 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.894318 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:56.894323 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:56.894393 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:56.930771 1639474 cri.go:89] found id: ""
	I1216 06:41:56.930786 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.930795 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:56.930800 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:56.930877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:56.961829 1639474 cri.go:89] found id: ""
	I1216 06:41:56.961855 1639474 logs.go:282] 0 containers: []
	W1216 06:41:56.961862 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:56.961869 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:41:56.961880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:41:56.982515 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:56.982532 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:41:57.053403 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:57.042504   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.043169   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.044928   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047094   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:57.047728   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:41:57.053413 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:41:57.053424 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:41:57.122315 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:41:57.122338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:41:57.151668 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:41:57.151684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:41:59.721370 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:41:59.731285 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:41:59.731355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:41:59.759821 1639474 cri.go:89] found id: ""
	I1216 06:41:59.759835 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.759843 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:41:59.759848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:41:59.759905 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:41:59.784708 1639474 cri.go:89] found id: ""
	I1216 06:41:59.784721 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.784728 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:41:59.784733 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:41:59.784791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:41:59.810181 1639474 cri.go:89] found id: ""
	I1216 06:41:59.810196 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.810204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:41:59.810209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:41:59.810268 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:41:59.836051 1639474 cri.go:89] found id: ""
	I1216 06:41:59.836072 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.836082 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:41:59.836094 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:41:59.836177 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:41:59.860701 1639474 cri.go:89] found id: ""
	I1216 06:41:59.860714 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.860722 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:41:59.860727 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:41:59.860786 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:41:59.885062 1639474 cri.go:89] found id: ""
	I1216 06:41:59.885076 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.885092 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:41:59.885098 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:41:59.885154 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:41:59.926044 1639474 cri.go:89] found id: ""
	I1216 06:41:59.926058 1639474 logs.go:282] 0 containers: []
	W1216 06:41:59.926065 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:41:59.926073 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:41:59.926099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:00.037850 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:41:59.990877   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.991479   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993112   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.993660   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:41:59.995321   15478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:00.037864 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:00.037877 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:00.264777 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:00.264802 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:00.361496 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:00.361518 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:00.460153 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:00.460175 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:02.976790 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:02.987102 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:02.987180 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:03.015111 1639474 cri.go:89] found id: ""
	I1216 06:42:03.015126 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.015133 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:03.015139 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:03.015202 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:03.040871 1639474 cri.go:89] found id: ""
	I1216 06:42:03.040903 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.040910 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:03.040915 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:03.040977 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:03.065726 1639474 cri.go:89] found id: ""
	I1216 06:42:03.065740 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.065748 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:03.065754 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:03.065813 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:03.090951 1639474 cri.go:89] found id: ""
	I1216 06:42:03.090966 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.090973 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:03.090979 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:03.091037 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:03.119521 1639474 cri.go:89] found id: ""
	I1216 06:42:03.119536 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.119543 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:03.119549 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:03.119615 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:03.147166 1639474 cri.go:89] found id: ""
	I1216 06:42:03.147181 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.147188 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:03.147193 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:03.147267 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:03.172021 1639474 cri.go:89] found id: ""
	I1216 06:42:03.172035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:03.172042 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:03.172050 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:03.172060 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:03.186822 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:03.186838 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:03.250765 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:03.242422   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.243046   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.244675   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.245279   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:03.246834   15588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:03.250775 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:03.250786 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:03.325562 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:03.325590 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:03.355074 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:03.355093 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:05.922524 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:05.932734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:05.932804 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:05.960790 1639474 cri.go:89] found id: ""
	I1216 06:42:05.960804 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.960811 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:05.960816 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:05.960884 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:05.986356 1639474 cri.go:89] found id: ""
	I1216 06:42:05.986386 1639474 logs.go:282] 0 containers: []
	W1216 06:42:05.986394 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:05.986399 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:05.986458 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:06.015030 1639474 cri.go:89] found id: ""
	I1216 06:42:06.015046 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.015053 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:06.015058 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:06.015119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:06.041009 1639474 cri.go:89] found id: ""
	I1216 06:42:06.041023 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.041030 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:06.041035 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:06.041091 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:06.068292 1639474 cri.go:89] found id: ""
	I1216 06:42:06.068306 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.068314 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:06.068319 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:06.068375 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:06.100555 1639474 cri.go:89] found id: ""
	I1216 06:42:06.100569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.100576 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:06.100582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:06.100642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:06.132353 1639474 cri.go:89] found id: ""
	I1216 06:42:06.132367 1639474 logs.go:282] 0 containers: []
	W1216 06:42:06.132374 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:06.132382 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:06.132392 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:06.201249 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:06.192521   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.193141   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.194767   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.195329   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:06.197078   15689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:06.201259 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:06.201271 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:06.271083 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:06.271102 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:06.300840 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:06.300857 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:06.369023 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:06.369043 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:08.885532 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:08.897655 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:08.897714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:08.929123 1639474 cri.go:89] found id: ""
	I1216 06:42:08.929137 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.929144 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:08.929149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:08.929216 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:08.969020 1639474 cri.go:89] found id: ""
	I1216 06:42:08.969036 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.969043 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:08.969049 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:08.969107 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:08.995554 1639474 cri.go:89] found id: ""
	I1216 06:42:08.995569 1639474 logs.go:282] 0 containers: []
	W1216 06:42:08.995577 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:08.995582 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:08.995642 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:09.023705 1639474 cri.go:89] found id: ""
	I1216 06:42:09.023720 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.023727 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:09.023732 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:09.023795 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:09.050243 1639474 cri.go:89] found id: ""
	I1216 06:42:09.050263 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.050270 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:09.050275 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:09.050332 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:09.075763 1639474 cri.go:89] found id: ""
	I1216 06:42:09.075778 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.075786 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:09.075791 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:09.075847 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:09.102027 1639474 cri.go:89] found id: ""
	I1216 06:42:09.102042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:09.102050 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:09.102058 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:09.102072 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:09.131304 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:09.131322 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:09.197595 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:09.197616 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:09.214311 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:09.214329 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:09.280261 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:09.272137   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.272954   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274571   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.274879   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:09.276370   15812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:09.280272 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:09.280287 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:11.849647 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:11.859759 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:11.859820 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:11.885934 1639474 cri.go:89] found id: ""
	I1216 06:42:11.885948 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.885955 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:11.885960 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:11.886024 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:11.915333 1639474 cri.go:89] found id: ""
	I1216 06:42:11.915347 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.915354 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:11.915359 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:11.915420 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:11.958797 1639474 cri.go:89] found id: ""
	I1216 06:42:11.958811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.958818 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:11.958823 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:11.958882 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:11.986843 1639474 cri.go:89] found id: ""
	I1216 06:42:11.986858 1639474 logs.go:282] 0 containers: []
	W1216 06:42:11.986865 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:11.986870 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:11.986928 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:12.016252 1639474 cri.go:89] found id: ""
	I1216 06:42:12.016268 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.016275 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:12.016280 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:12.016340 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:12.047250 1639474 cri.go:89] found id: ""
	I1216 06:42:12.047264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.047271 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:12.047276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:12.047334 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:12.073692 1639474 cri.go:89] found id: ""
	I1216 06:42:12.073706 1639474 logs.go:282] 0 containers: []
	W1216 06:42:12.073713 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:12.073721 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:12.073732 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:12.137759 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:12.129267   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.129895   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131416   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.131890   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:12.133511   15900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:12.137769 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:12.137780 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:12.206794 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:12.206815 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:12.235894 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:12.235910 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:12.304248 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:12.304267 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:14.819229 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:14.829519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:14.829579 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:14.854644 1639474 cri.go:89] found id: ""
	I1216 06:42:14.854658 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.854665 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:14.854670 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:14.854744 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:14.879759 1639474 cri.go:89] found id: ""
	I1216 06:42:14.879774 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.879781 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:14.879785 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:14.879846 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:14.914620 1639474 cri.go:89] found id: ""
	I1216 06:42:14.914633 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.914640 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:14.914645 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:14.914706 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:14.949457 1639474 cri.go:89] found id: ""
	I1216 06:42:14.949470 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.949477 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:14.949482 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:14.949539 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:14.978393 1639474 cri.go:89] found id: ""
	I1216 06:42:14.978407 1639474 logs.go:282] 0 containers: []
	W1216 06:42:14.978414 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:14.978419 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:14.978485 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:15.059438 1639474 cri.go:89] found id: ""
	I1216 06:42:15.059454 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.059468 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:15.059474 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:15.059560 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:15.087173 1639474 cri.go:89] found id: ""
	I1216 06:42:15.087188 1639474 logs.go:282] 0 containers: []
	W1216 06:42:15.087194 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:15.087202 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:15.087212 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:15.157589 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:15.157610 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:15.187757 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:15.187774 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:15.256722 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:15.256742 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:15.271447 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:15.271464 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:15.332113 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:15.323890   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.324640   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.325693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.326204   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:15.327840   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:17.832401 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:17.842950 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:17.843012 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:17.871468 1639474 cri.go:89] found id: ""
	I1216 06:42:17.871483 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.871490 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:17.871496 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:17.871554 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:17.904274 1639474 cri.go:89] found id: ""
	I1216 06:42:17.904288 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.904295 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:17.904299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:17.904355 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:17.936320 1639474 cri.go:89] found id: ""
	I1216 06:42:17.936334 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.936341 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:17.936346 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:17.936403 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:17.967750 1639474 cri.go:89] found id: ""
	I1216 06:42:17.967764 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.967771 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:17.967775 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:17.967833 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:17.993994 1639474 cri.go:89] found id: ""
	I1216 06:42:17.994008 1639474 logs.go:282] 0 containers: []
	W1216 06:42:17.994016 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:17.994021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:17.994085 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:18.021367 1639474 cri.go:89] found id: ""
	I1216 06:42:18.021382 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.021390 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:18.021395 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:18.021463 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:18.052100 1639474 cri.go:89] found id: ""
	I1216 06:42:18.052115 1639474 logs.go:282] 0 containers: []
	W1216 06:42:18.052122 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:18.052130 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:18.052141 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:18.117261 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:18.117282 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:18.132219 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:18.132235 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:18.198118 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:18.189377   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.189937   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.191659   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.192181   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:18.193769   16116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:18.198128 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:18.198139 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:18.265118 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:18.265138 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:20.794027 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:20.803718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:20.803782 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:20.828191 1639474 cri.go:89] found id: ""
	I1216 06:42:20.828205 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.828212 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:20.828217 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:20.828278 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:20.853411 1639474 cri.go:89] found id: ""
	I1216 06:42:20.853425 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.853432 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:20.853437 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:20.853499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:20.877825 1639474 cri.go:89] found id: ""
	I1216 06:42:20.877841 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.877848 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:20.877853 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:20.877908 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:20.910891 1639474 cri.go:89] found id: ""
	I1216 06:42:20.910904 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.910911 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:20.910916 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:20.910973 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:20.941025 1639474 cri.go:89] found id: ""
	I1216 06:42:20.941039 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.941045 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:20.941050 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:20.941108 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:20.973633 1639474 cri.go:89] found id: ""
	I1216 06:42:20.973647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:20.973654 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:20.973659 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:20.973714 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:21.002805 1639474 cri.go:89] found id: ""
	I1216 06:42:21.002821 1639474 logs.go:282] 0 containers: []
	W1216 06:42:21.002828 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:21.002837 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:21.002849 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:21.068941 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:21.068961 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:21.083829 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:21.083853 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:21.147337 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:21.139664   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.140092   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141644   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.141963   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:21.143475   16218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:21.147347 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:21.147359 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:21.215583 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:21.215604 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.745376 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:23.755709 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:23.755771 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:23.781141 1639474 cri.go:89] found id: ""
	I1216 06:42:23.781155 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.781162 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:23.781168 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:23.781234 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:23.811661 1639474 cri.go:89] found id: ""
	I1216 06:42:23.811675 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.811683 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:23.811687 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:23.811745 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:23.837608 1639474 cri.go:89] found id: ""
	I1216 06:42:23.837623 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.837630 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:23.837635 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:23.837694 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:23.864015 1639474 cri.go:89] found id: ""
	I1216 06:42:23.864041 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.864051 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:23.864057 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:23.864124 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:23.889789 1639474 cri.go:89] found id: ""
	I1216 06:42:23.889806 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.889813 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:23.889818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:23.889877 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:23.918576 1639474 cri.go:89] found id: ""
	I1216 06:42:23.918590 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.918598 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:23.918603 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:23.918661 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:23.950516 1639474 cri.go:89] found id: ""
	I1216 06:42:23.950531 1639474 logs.go:282] 0 containers: []
	W1216 06:42:23.950537 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:23.950545 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:23.950555 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:23.980911 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:23.980928 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:24.047333 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:24.047355 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:24.063020 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:24.063037 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:24.131565 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:24.123164   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.124006   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.125798   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.126123   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:24.127396   16330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:24.131574 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:24.131593 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.704797 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:26.715064 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:26.715144 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:26.741016 1639474 cri.go:89] found id: ""
	I1216 06:42:26.741030 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.741037 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:26.741043 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:26.741102 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:26.771178 1639474 cri.go:89] found id: ""
	I1216 06:42:26.771192 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.771200 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:26.771205 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:26.771263 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:26.796426 1639474 cri.go:89] found id: ""
	I1216 06:42:26.796440 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.796447 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:26.796452 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:26.796530 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:26.822428 1639474 cri.go:89] found id: ""
	I1216 06:42:26.822444 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.822451 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:26.822456 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:26.822512 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:26.855530 1639474 cri.go:89] found id: ""
	I1216 06:42:26.855545 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.855552 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:26.855557 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:26.855617 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:26.880135 1639474 cri.go:89] found id: ""
	I1216 06:42:26.880149 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.880156 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:26.880161 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:26.880219 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:26.917307 1639474 cri.go:89] found id: ""
	I1216 06:42:26.917321 1639474 logs.go:282] 0 containers: []
	W1216 06:42:26.917327 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:26.917335 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:26.917347 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:26.997666 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:26.997690 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:27.033638 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:27.033662 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:27.104861 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:27.104880 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:27.119683 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:27.119699 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:27.187945 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:27.180063   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.180759   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.182494   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.183036   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:27.184032   16436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:29.688270 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:29.698566 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:29.698629 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:29.724518 1639474 cri.go:89] found id: ""
	I1216 06:42:29.724532 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.724539 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:29.724544 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:29.724605 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:29.749436 1639474 cri.go:89] found id: ""
	I1216 06:42:29.749451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.749458 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:29.749463 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:29.749525 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:29.774261 1639474 cri.go:89] found id: ""
	I1216 06:42:29.774276 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.774283 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:29.774290 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:29.774349 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:29.799455 1639474 cri.go:89] found id: ""
	I1216 06:42:29.799469 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.799478 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:29.799483 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:29.799541 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:29.823692 1639474 cri.go:89] found id: ""
	I1216 06:42:29.823707 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.823714 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:29.823718 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:29.823784 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:29.851131 1639474 cri.go:89] found id: ""
	I1216 06:42:29.851156 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.851164 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:29.851169 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:29.851239 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:29.875892 1639474 cri.go:89] found id: ""
	I1216 06:42:29.875906 1639474 logs.go:282] 0 containers: []
	W1216 06:42:29.875923 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:29.875931 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:29.875942 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:29.949752 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:29.949772 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:29.966843 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:29.966860 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:30.075177 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:30.040929   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.058446   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.059998   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.060413   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:30.066181   16528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:30.075189 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:30.075201 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:30.153503 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:30.153525 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:32.683959 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:32.695552 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:32.695611 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:32.719250 1639474 cri.go:89] found id: ""
	I1216 06:42:32.719264 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.719271 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:32.719276 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:32.719335 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:32.744437 1639474 cri.go:89] found id: ""
	I1216 06:42:32.744451 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.744459 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:32.744464 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:32.744568 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:32.772181 1639474 cri.go:89] found id: ""
	I1216 06:42:32.772196 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.772204 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:32.772209 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:32.772273 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:32.799021 1639474 cri.go:89] found id: ""
	I1216 06:42:32.799035 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.799041 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:32.799046 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:32.799103 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:32.826452 1639474 cri.go:89] found id: ""
	I1216 06:42:32.826466 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.826473 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:32.826478 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:32.826535 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:32.854867 1639474 cri.go:89] found id: ""
	I1216 06:42:32.854881 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.854888 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:32.854893 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:32.854953 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:32.883584 1639474 cri.go:89] found id: ""
	I1216 06:42:32.883608 1639474 logs.go:282] 0 containers: []
	W1216 06:42:32.883615 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:32.883624 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:32.883635 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:32.969443 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:32.969472 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:33.000330 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:33.000354 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:33.068289 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:33.068311 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:33.083127 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:33.083145 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:33.154304 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:33.145404   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.146150   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.147831   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.148385   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:33.150308   16644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:35.655139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:35.665534 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:35.665616 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:35.691995 1639474 cri.go:89] found id: ""
	I1216 06:42:35.692009 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.692016 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:35.692021 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:35.692079 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:35.718728 1639474 cri.go:89] found id: ""
	I1216 06:42:35.718742 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.718748 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:35.718753 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:35.718812 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:35.743314 1639474 cri.go:89] found id: ""
	I1216 06:42:35.743328 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.743334 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:35.743339 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:35.743400 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:35.767871 1639474 cri.go:89] found id: ""
	I1216 06:42:35.767885 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.767893 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:35.767897 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:35.767958 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:35.791769 1639474 cri.go:89] found id: ""
	I1216 06:42:35.791783 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.791790 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:35.791795 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:35.791854 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:35.819002 1639474 cri.go:89] found id: ""
	I1216 06:42:35.819016 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.819023 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:35.819028 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:35.819083 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:35.843378 1639474 cri.go:89] found id: ""
	I1216 06:42:35.843392 1639474 logs.go:282] 0 containers: []
	W1216 06:42:35.843399 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:35.843407 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:35.843417 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:35.912874 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:35.912893 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:35.930936 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:35.930952 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:36.006314 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:35.994455   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.995222   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.996889   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.997269   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:35.999528   16736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:36.006326 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:36.006338 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:36.080077 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:36.080099 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:38.612139 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:38.622353 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:38.622412 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:38.648583 1639474 cri.go:89] found id: ""
	I1216 06:42:38.648597 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.648604 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:38.648613 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:38.648671 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:38.674035 1639474 cri.go:89] found id: ""
	I1216 06:42:38.674049 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.674056 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:38.674061 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:38.674119 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:38.699213 1639474 cri.go:89] found id: ""
	I1216 06:42:38.699228 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.699234 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:38.699239 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:38.699294 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:38.723415 1639474 cri.go:89] found id: ""
	I1216 06:42:38.723429 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.723436 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:38.723441 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:38.723499 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:38.751059 1639474 cri.go:89] found id: ""
	I1216 06:42:38.751074 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.751081 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:38.751086 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:38.751146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:38.779542 1639474 cri.go:89] found id: ""
	I1216 06:42:38.779557 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.779584 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:38.779589 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:38.779660 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:38.813466 1639474 cri.go:89] found id: ""
	I1216 06:42:38.813480 1639474 logs.go:282] 0 containers: []
	W1216 06:42:38.813488 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:38.813496 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:38.813507 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:38.842140 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:38.842158 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:38.908007 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:38.908027 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:38.923600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:38.923618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:38.995488 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:38.986888   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.987379   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.988908   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.989502   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:38.991340   16852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:38.995498 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:38.995509 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:41.565694 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:41.575799 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:41.575860 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:41.600796 1639474 cri.go:89] found id: ""
	I1216 06:42:41.600811 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.600817 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:41.600822 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:41.600879 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:41.625792 1639474 cri.go:89] found id: ""
	I1216 06:42:41.625807 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.625814 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:41.625818 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:41.625875 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:41.650243 1639474 cri.go:89] found id: ""
	I1216 06:42:41.650257 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.650264 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:41.650269 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:41.650328 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:41.675889 1639474 cri.go:89] found id: ""
	I1216 06:42:41.675915 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.675923 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:41.675928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:41.675993 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:41.703050 1639474 cri.go:89] found id: ""
	I1216 06:42:41.703064 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.703082 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:41.703088 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:41.703146 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:41.729269 1639474 cri.go:89] found id: ""
	I1216 06:42:41.729283 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.729293 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:41.729299 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:41.729369 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:41.753781 1639474 cri.go:89] found id: ""
	I1216 06:42:41.753796 1639474 logs.go:282] 0 containers: []
	W1216 06:42:41.753803 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:41.753811 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:41.753821 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:41.783522 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:41.783538 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:41.848274 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:41.848295 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:41.863600 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:41.863618 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:41.936160 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:41.927245   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.928139   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.929727   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.930266   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:41.931845   16955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:41.936170 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:41.936181 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.511341 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:44.521587 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:44.521648 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:44.547007 1639474 cri.go:89] found id: ""
	I1216 06:42:44.547021 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.547028 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:44.547033 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:44.547096 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:44.572902 1639474 cri.go:89] found id: ""
	I1216 06:42:44.572917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.572924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:44.572928 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:44.572995 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:44.598645 1639474 cri.go:89] found id: ""
	I1216 06:42:44.598659 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.598667 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:44.598672 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:44.598731 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:44.627366 1639474 cri.go:89] found id: ""
	I1216 06:42:44.627381 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.627388 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:44.627396 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:44.627452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:44.654294 1639474 cri.go:89] found id: ""
	I1216 06:42:44.654309 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.654319 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:44.654324 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:44.654382 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:44.679363 1639474 cri.go:89] found id: ""
	I1216 06:42:44.679378 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.679385 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:44.679392 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:44.679452 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:44.714760 1639474 cri.go:89] found id: ""
	I1216 06:42:44.714775 1639474 logs.go:282] 0 containers: []
	W1216 06:42:44.714781 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:44.714789 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:44.714800 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:44.779035 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:44.779055 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:44.793727 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:44.793745 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:44.860570 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:44.851694   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.852237   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.853933   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.854480   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:44.856105   17051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:44.860581 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:44.860594 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:44.934290 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:44.934310 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:47.465385 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:47.475377 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:47.475436 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:47.503015 1639474 cri.go:89] found id: ""
	I1216 06:42:47.503042 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.503049 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:47.503055 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:47.503136 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:47.528903 1639474 cri.go:89] found id: ""
	I1216 06:42:47.528917 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.528924 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:47.528929 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:47.528989 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:47.554766 1639474 cri.go:89] found id: ""
	I1216 06:42:47.554781 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.554788 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:47.554792 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:47.554858 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:47.585092 1639474 cri.go:89] found id: ""
	I1216 06:42:47.585106 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.585113 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:47.585118 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:47.585214 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:47.610493 1639474 cri.go:89] found id: ""
	I1216 06:42:47.610508 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.610514 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:47.610519 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:47.610577 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:47.635340 1639474 cri.go:89] found id: ""
	I1216 06:42:47.635354 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.635361 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:47.635365 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:47.635424 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:47.661321 1639474 cri.go:89] found id: ""
	I1216 06:42:47.661335 1639474 logs.go:282] 0 containers: []
	W1216 06:42:47.661342 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:47.661349 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:47.661360 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:47.726879 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:47.726898 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:47.741659 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:47.741684 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:47.804784 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:47.796440   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.797188   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.798787   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.799294   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:47.800945   17154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:47.804795 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:47.804807 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:47.871075 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:47.871096 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.410207 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:50.419946 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:42:50.420007 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:42:50.446668 1639474 cri.go:89] found id: ""
	I1216 06:42:50.446683 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.446689 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:42:50.446694 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:42:50.446753 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:42:50.471089 1639474 cri.go:89] found id: ""
	I1216 06:42:50.471119 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.471126 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:42:50.471131 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:42:50.471201 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:42:50.496821 1639474 cri.go:89] found id: ""
	I1216 06:42:50.496836 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.496843 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:42:50.496848 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:42:50.496906 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:42:50.522621 1639474 cri.go:89] found id: ""
	I1216 06:42:50.522647 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.522655 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:42:50.522660 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:42:50.522720 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:42:50.547813 1639474 cri.go:89] found id: ""
	I1216 06:42:50.547828 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.547847 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:42:50.547858 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:42:50.547926 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:42:50.573695 1639474 cri.go:89] found id: ""
	I1216 06:42:50.573709 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.573716 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:42:50.573734 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:42:50.573791 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:42:50.597701 1639474 cri.go:89] found id: ""
	I1216 06:42:50.597728 1639474 logs.go:282] 0 containers: []
	W1216 06:42:50.597735 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:42:50.597743 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:42:50.597754 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:42:50.634166 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:42:50.634183 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:42:50.700131 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:42:50.700152 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:42:50.714678 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:42:50.714694 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:42:50.782436 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:42:50.773358   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.773772   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775550   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.775862   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:42:50.778084   17266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:42:50.782446 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:42:50.782457 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:42:53.352592 1639474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:42:53.362386 1639474 kubeadm.go:602] duration metric: took 4m3.23343297s to restartPrimaryControlPlane
	W1216 06:42:53.362440 1639474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 06:42:53.362522 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:42:53.770157 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:42:53.783560 1639474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:42:53.791651 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:42:53.791714 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:42:53.800044 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:42:53.800054 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:42:53.800109 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:42:53.808053 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:42:53.808117 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:42:53.815698 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:42:53.823700 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:42:53.823760 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:42:53.831721 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.840020 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:42:53.840081 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:42:53.848003 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:42:53.856083 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:42:53.856151 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:42:53.863882 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:42:53.905755 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:42:53.905814 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:42:53.975149 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:42:53.975215 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:42:53.975250 1639474 kubeadm.go:319] OS: Linux
	I1216 06:42:53.975294 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:42:53.975341 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:42:53.975388 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:42:53.975435 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:42:53.975482 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:42:53.975528 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:42:53.975572 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:42:53.975619 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:42:53.975663 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:42:54.043340 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:42:54.043458 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:42:54.043554 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:42:54.051413 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:42:54.053411 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:42:54.053534 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:42:54.053635 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:42:54.053726 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:42:54.053790 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:42:54.053864 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:42:54.053921 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:42:54.054179 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:42:54.054243 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:42:54.054338 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:42:54.054707 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:42:54.054967 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:42:54.055037 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:42:54.157358 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:42:54.374409 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:42:54.451048 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:42:54.729890 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:42:55.123905 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:42:55.124705 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:42:55.129362 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:42:55.130938 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:42:55.131069 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:42:55.131195 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:42:55.132057 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:42:55.147012 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:42:55.147116 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:42:55.155648 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:42:55.155999 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:42:55.156106 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:42:55.287137 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:42:55.287251 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:46:55.288217 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001159637s
	I1216 06:46:55.288243 1639474 kubeadm.go:319] 
	I1216 06:46:55.288304 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:46:55.288336 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:46:55.288440 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:46:55.288445 1639474 kubeadm.go:319] 
	I1216 06:46:55.288565 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:46:55.288597 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:46:55.288627 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:46:55.288630 1639474 kubeadm.go:319] 
	I1216 06:46:55.292707 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:46:55.293173 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:46:55.293300 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:46:55.293545 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:46:55.293552 1639474 kubeadm.go:319] 
	I1216 06:46:55.293641 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 06:46:55.293765 1639474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001159637s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:46:55.293855 1639474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 06:46:55.704413 1639474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:46:55.717800 1639474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:46:55.717860 1639474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:46:55.726221 1639474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:46:55.726230 1639474 kubeadm.go:158] found existing configuration files:
	
	I1216 06:46:55.726283 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 06:46:55.734520 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:46:55.734578 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:46:55.742443 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 06:46:55.750333 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:46:55.750396 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:46:55.758306 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.766326 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:46:55.766405 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:46:55.774041 1639474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 06:46:55.782003 1639474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:46:55.782061 1639474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:46:55.789651 1639474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:46:55.828645 1639474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:46:55.828882 1639474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:46:55.903247 1639474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:46:55.903309 1639474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 06:46:55.903344 1639474 kubeadm.go:319] OS: Linux
	I1216 06:46:55.903387 1639474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:46:55.903435 1639474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:46:55.903481 1639474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:46:55.903528 1639474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:46:55.903575 1639474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:46:55.903627 1639474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:46:55.903672 1639474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:46:55.903719 1639474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:46:55.903764 1639474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:46:55.978404 1639474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:46:55.978523 1639474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:46:55.978635 1639474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:46:55.988968 1639474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:46:55.992562 1639474 out.go:252]   - Generating certificates and keys ...
	I1216 06:46:55.992651 1639474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:46:55.992728 1639474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:46:55.992809 1639474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:46:55.992874 1639474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:46:55.992948 1639474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:46:55.993006 1639474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:46:55.993073 1639474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:46:55.993138 1639474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:46:55.993217 1639474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:46:55.993295 1639474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:46:55.993334 1639474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:46:55.993394 1639474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:46:56.216895 1639474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:46:56.479326 1639474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:46:56.885081 1639474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:46:57.284813 1639474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:46:57.705019 1639474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:46:57.705808 1639474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:46:57.708929 1639474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:46:57.712185 1639474 out.go:252]   - Booting up control plane ...
	I1216 06:46:57.712286 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:46:57.712364 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:46:57.713358 1639474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:46:57.728440 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:46:57.729026 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:46:57.736761 1639474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:46:57.737279 1639474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:46:57.737495 1639474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:46:57.864121 1639474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:46:57.864234 1639474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:50:57.863911 1639474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000152952s
	I1216 06:50:57.863934 1639474 kubeadm.go:319] 
	I1216 06:50:57.863990 1639474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:50:57.864023 1639474 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:50:57.864128 1639474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:50:57.864133 1639474 kubeadm.go:319] 
	I1216 06:50:57.864236 1639474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:50:57.864267 1639474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:50:57.864298 1639474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:50:57.864301 1639474 kubeadm.go:319] 
	I1216 06:50:57.868420 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 06:50:57.868920 1639474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:50:57.869030 1639474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:50:57.869291 1639474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:50:57.869296 1639474 kubeadm.go:319] 
	I1216 06:50:57.869364 1639474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:50:57.869421 1639474 kubeadm.go:403] duration metric: took 12m7.776167752s to StartCluster
	I1216 06:50:57.869453 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:50:57.869520 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:50:57.901135 1639474 cri.go:89] found id: ""
	I1216 06:50:57.901151 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.901158 1639474 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:50:57.901163 1639474 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:50:57.901226 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:50:57.925331 1639474 cri.go:89] found id: ""
	I1216 06:50:57.925345 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.925352 1639474 logs.go:284] No container was found matching "etcd"
	I1216 06:50:57.925357 1639474 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:50:57.925415 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:50:57.950341 1639474 cri.go:89] found id: ""
	I1216 06:50:57.950356 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.950363 1639474 logs.go:284] No container was found matching "coredns"
	I1216 06:50:57.950367 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:50:57.950426 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:50:57.975123 1639474 cri.go:89] found id: ""
	I1216 06:50:57.975137 1639474 logs.go:282] 0 containers: []
	W1216 06:50:57.975144 1639474 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:50:57.975149 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:50:57.975208 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:50:58.004659 1639474 cri.go:89] found id: ""
	I1216 06:50:58.004676 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.004684 1639474 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:50:58.004689 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:50:58.004760 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:50:58.030464 1639474 cri.go:89] found id: ""
	I1216 06:50:58.030478 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.030485 1639474 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:50:58.030491 1639474 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:50:58.030552 1639474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:50:58.056049 1639474 cri.go:89] found id: ""
	I1216 06:50:58.056063 1639474 logs.go:282] 0 containers: []
	W1216 06:50:58.056071 1639474 logs.go:284] No container was found matching "kindnet"
	I1216 06:50:58.056079 1639474 logs.go:123] Gathering logs for kubelet ...
	I1216 06:50:58.056091 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:50:58.124116 1639474 logs.go:123] Gathering logs for dmesg ...
	I1216 06:50:58.124137 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:50:58.139439 1639474 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:50:58.139455 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:50:58.229902 1639474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:50:58.220695   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.221180   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.222906   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.223593   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:50:58.225247   21068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:50:58.229914 1639474 logs.go:123] Gathering logs for CRI-O ...
	I1216 06:50:58.229925 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 06:50:58.301956 1639474 logs.go:123] Gathering logs for container status ...
	I1216 06:50:58.301977 1639474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:50:58.330306 1639474 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:50:58.330348 1639474 out.go:285] * 
	W1216 06:50:58.330448 1639474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.330506 1639474 out.go:285] * 
	W1216 06:50:58.332927 1639474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:50:58.338210 1639474 out.go:203] 
	W1216 06:50:58.341028 1639474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000152952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:50:58.341164 1639474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:50:58.341212 1639474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:50:58.344413 1639474 out.go:203] 
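
The run above exits with K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase times out because the kubelet never becomes healthy, so no control-plane containers are ever created (which is why every "listing CRI containers" query earlier returns an empty list). A minimal way to act on the report's own suggestions is sketched below; the profile name functional-364120, the docker driver, the crio runtime and the kubelet.cgroup-driver=systemd extra-config are all taken from this log, but treat the exact invocation as an illustration rather than the command the test harness actually runs.

    # Inspect the crash-looping kubelet inside the node (commands suggested in the log above).
    minikube -p functional-364120 ssh -- sudo systemctl status kubelet
    minikube -p functional-364120 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # Retry with the hint printed next to the K8S_KUBELET_NOT_RUNNING error:
    # pass the kubelet cgroup driver explicitly.
    minikube start -p functional-364120 --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd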
	
	
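The kubelet journal at the bottom of this report shows why the health check at http://127.0.0.1:10248/healthz never answers: kubelet v1.35.0-beta.0 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"). The kubeadm preflight warning above points at the same thing via the KubeletConfiguration option FailCgroupV1. A hypothetical sketch of expressing that as a kubeadm patch follows; the transcript only confirms that kubeadm applies a strategic-merge patch to the "kubeletconfiguration" target, so the patch directory, the file name and the YAML field spelling (failCgroupV1) are assumptions, not values taken from this run.

    # Hypothetical: allow kubelet v1.35 to keep running on a cgroup v1 host.
    sudo mkdir -p /var/tmp/minikube/patches
    sudo tee /var/tmp/minikube/patches/kubeletconfiguration.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
    # kubeadm init would then need --patches /var/tmp/minikube/patches
    # (the SystemVerification preflight check is already skipped in the command above).
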
	==> CRI-O <==
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553471769Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553507896Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553554657Z" level=info msg="Create NRI interface"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553657485Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553665107Z" level=info msg="runtime interface created"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553674699Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553680746Z" level=info msg="runtime interface starting up..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553686137Z" level=info msg="starting plugins..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553698814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553771561Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:38:48 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.046654305Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=2afa36a7-e595-4e9e-9866-100014f74db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.047561496Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bfee085e-d788-43aa-852e-e818968557f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048165668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8209edd3-2ad3-4cea-9d15-760a1b94c10d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048839782Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f38b3b25-171e-488b-9dbb-3a4615d07ce7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049385123Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=674d3a91-05c7-4375-a638-2bb51d77e82a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049934157Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d7315967-45e5-4ab2-b579-15a88e3c5cf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.050441213Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2d27746-f739-4711-a521-d245b78e775c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.981581513Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=cc27c34f-1129-41fd-83b5-8698b0697603 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982462832Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f632e983-ad57-48b2-98c3-8802e4b6bb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982972654Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4d99c7a4-52a2-4a4f-9569-9d8a29ee230d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983463866Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=824a4ba3-63ed-49ce-a194-3bf34f462483 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983972891Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=52cc52f0-f1ca-4fc4-a91a-13dd8c19e754 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984501125Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=baf81d2d-269c-44fd-a82c-811876adf596 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984974015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=88fe0e4e-4ea7-4b38-a635-f3138f370377 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:53:18.143992   22746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:18.144790   22746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:18.146251   22746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:18.146672   22746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:18.148123   22746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:53:18 up  9:35,  0 user,  load average: 0.21, 0.18, 0.41
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:53:15 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:16 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1146.
	Dec 16 06:53:16 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:16 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:16 functional-364120 kubelet[22636]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:16 functional-364120 kubelet[22636]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:16 functional-364120 kubelet[22636]: E1216 06:53:16.194277   22636 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:16 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:16 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:16 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1147.
	Dec 16 06:53:16 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:16 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:16 functional-364120 kubelet[22642]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:16 functional-364120 kubelet[22642]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:16 functional-364120 kubelet[22642]: E1216 06:53:16.954310   22642 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:16 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:16 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:17 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1148.
	Dec 16 06:53:17 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:17 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:17 functional-364120 kubelet[22663]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:17 functional-364120 kubelet[22663]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:17 functional-364120 kubelet[22663]: E1216 06:53:17.730729   22663 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:17 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:17 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
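
Every kubectl error in the dump above reduces to the same low-level symptom: nothing is listening on the apiserver port. A minimal Go probe sketch of that check (illustrative only, not taken from helpers_test.go; localhost:8441 is assumed to be this profile's apiserver endpoint, as the errors themselves indicate):

	// apiserver_probe.go - illustrative sketch, not part of the test suite.
	// Reproduces the TCP-level failure behind the "connection refused" errors above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// With the apiserver down this prints the same "connect: connection refused".
			fmt.Println("apiserver unreachable:", err)
			return
		}
		_ = conn.Close()
		fmt.Println("apiserver port is open")
	}
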
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (343.458072ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.33s)
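
The kubelet crash loop in the dump above is a host precondition failure: this kubelet is configured not to run on a cgroup v1 host, the node is still on cgroup v1, so systemd restarts it indefinitely (restart counter 1146-1148 and climbing) and the apiserver never comes back, which is what this test and the PersistentVolumeClaim test below then trip over. A minimal sketch of how one could confirm the host's cgroup mode (illustrative only; cgroup_mode.go is not part of the test suite, and the marker-file check is a common heuristic rather than the kubelet's own validation code):

	// cgroup_mode.go - illustrative sketch, not part of the test suite.
	// cgroup v2 exposes cgroup.controllers at the root of the unified mount;
	// on a cgroup v1-only host that path does not exist.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy) - kubelet should pass this validation")
		} else {
			fmt.Println("cgroup v1 host - kubelet will keep failing validation, as in the log above")
		}
	}

The kernel section above shows an Ubuntu 20.04 host kernel (5.15.0-1084-aws #91~20.04.1-Ubuntu), and 20.04 still defaults to the legacy cgroup v1 hierarchy, which is consistent with the repeated validation failure.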

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1216 06:51:16.729686 1599255 retry.go:31] will retry after 3.816721929s: Temporary Error: Get "http://10.100.172.184": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1216 06:51:30.547496 1599255 retry.go:31] will retry after 4.053195144s: Temporary Error: Get "http://10.100.172.184": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1216 06:51:44.601323 1599255 retry.go:31] will retry after 5.706203389s: Temporary Error: Get "http://10.100.172.184": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1216 06:52:00.309831 1599255 retry.go:31] will retry after 12.99578855s: Temporary Error: Get "http://10.100.172.184": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1216 06:52:23.306976 1599255 retry.go:31] will retry after 17.704128686s: Temporary Error: Get "http://10.100.172.184": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1216 06:52:51.011467 1599255 retry.go:31] will retry after 15.391638749s: Temporary Error: Get "http://10.100.172.184": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1216 06:53:08.325492 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this warning is emitted verbatim on every poll attempt while the apiserver at 192.168.49.2:8441 refuses connections, for the duration of the 4m0s wait)
E1216 06:54:09.891927 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(the same warning continues to repeat on each poll until the wait's context deadline is reached)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (314.170702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
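For reference, the selector the test was polling can be checked by hand against the same endpoint. This is a hypothetical reproduction sketch, not part of the test output, and it assumes a "functional-364120" kubeconfig context is available on the host:

	# hypothetical manual check of the label selector the test polls
	kubectl --context functional-364120 -n kube-system get pods -l integration-test=storage-provisioner
	# or hit the apiserver URL from the warnings directly; with the apiserver down this
	# should fail with the same "connection refused"
	curl -k "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner"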
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
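The inspect dump above is the full container JSON; the fields relevant to this failure (container state, and the host port Docker mapped to the apiserver's 8441/tcp) can be pulled directly with a format template. A minimal sketch, assuming the functional-364120 container still exists:

	# hypothetical one-liners to extract the relevant fields from docker inspect
	docker inspect -f '{{.State.Status}}' functional-364120
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-364120
	# equivalent shortcut for the port mapping
	docker port functional-364120 8441/tcp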
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (292.375758ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
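The two status probes disagree: the {{.Host}} probe above reports Running while the earlier {{.APIServer}} probe reported Stopped. All component states can be read in a single call; a sketch, not taken from the test output:

	# hypothetical combined status check for the profile
	out/minikube-linux-arm64 status -p functional-364120 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'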
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-364120 image load --daemon kicbase/echo-server:functional-364120 --alsologtostderr                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls                                                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image save kicbase/echo-server:functional-364120 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image rm kicbase/echo-server:functional-364120 --alsologtostderr                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls                                                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls                                                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image save --daemon kicbase/echo-server:functional-364120 --alsologtostderr                                                             │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh sudo cat /etc/test/nested/copy/1599255/hosts                                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh sudo cat /etc/ssl/certs/1599255.pem                                                                                                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh sudo cat /usr/share/ca-certificates/1599255.pem                                                                                     │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh sudo cat /etc/ssl/certs/15992552.pem                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh sudo cat /usr/share/ca-certificates/15992552.pem                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls --format short --alsologtostderr                                                                                               │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ update-context │ functional-364120 update-context --alsologtostderr -v=2                                                                                                   │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh            │ functional-364120 ssh pgrep buildkitd                                                                                                                     │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ image          │ functional-364120 image build -t localhost/my-image:functional-364120 testdata/build --alsologtostderr                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls                                                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls --format yaml --alsologtostderr                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls --format json --alsologtostderr                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ image          │ functional-364120 image ls --format table --alsologtostderr                                                                                               │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ update-context │ functional-364120 update-context --alsologtostderr -v=2                                                                                                   │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ update-context │ functional-364120 update-context --alsologtostderr -v=2                                                                                                   │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:53:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:53:33.621593 1657034 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:53:33.621738 1657034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.621751 1657034 out.go:374] Setting ErrFile to fd 2...
	I1216 06:53:33.621757 1657034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.622008 1657034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:53:33.622392 1657034 out.go:368] Setting JSON to false
	I1216 06:53:33.623268 1657034 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":34565,"bootTime":1765833449,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:53:33.623334 1657034 start.go:143] virtualization:  
	I1216 06:53:33.626568 1657034 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:53:33.630377 1657034 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:53:33.630472 1657034 notify.go:221] Checking for updates...
	I1216 06:53:33.636109 1657034 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:53:33.639064 1657034 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:53:33.641980 1657034 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:53:33.644866 1657034 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:53:33.647809 1657034 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:53:33.651292 1657034 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:53:33.651909 1657034 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:53:33.675417 1657034 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:53:33.675545 1657034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.738351 1657034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.728990657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.738463 1657034 docker.go:319] overlay module found
	I1216 06:53:33.741436 1657034 out.go:179] * Using the docker driver based on existing profile
	I1216 06:53:33.744303 1657034 start.go:309] selected driver: docker
	I1216 06:53:33.744333 1657034 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.744446 1657034 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:53:33.744631 1657034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.805595 1657034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.791477969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.806060 1657034 cni.go:84] Creating CNI manager for ""
	I1216 06:53:33.806123 1657034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:53:33.806164 1657034 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.809236 1657034 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.981581513Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=cc27c34f-1129-41fd-83b5-8698b0697603 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982462832Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f632e983-ad57-48b2-98c3-8802e4b6bb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982972654Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4d99c7a4-52a2-4a4f-9569-9d8a29ee230d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983463866Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=824a4ba3-63ed-49ce-a194-3bf34f462483 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983972891Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=52cc52f0-f1ca-4fc4-a91a-13dd8c19e754 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984501125Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=baf81d2d-269c-44fd-a82c-811876adf596 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984974015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=88fe0e4e-4ea7-4b38-a635-f3138f370377 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.563179317Z" level=info msg="Checking image status: kicbase/echo-server:functional-364120" id=fa75329b-bb50-4a96-b9ba-c9cd0d8bc339 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.563357657Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.563401481Z" level=info msg="Image kicbase/echo-server:functional-364120 not found" id=fa75329b-bb50-4a96-b9ba-c9cd0d8bc339 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.56346416Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-364120 found" id=fa75329b-bb50-4a96-b9ba-c9cd0d8bc339 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.586248519Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-364120" id=abab53c5-fa61-4fda-8728-82aa5b903a51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.586423552Z" level=info msg="Image docker.io/kicbase/echo-server:functional-364120 not found" id=abab53c5-fa61-4fda-8728-82aa5b903a51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.586484106Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-364120 found" id=abab53c5-fa61-4fda-8728-82aa5b903a51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.614003749Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-364120" id=97c430cb-f0f1-4d6a-8416-aece39715f23 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.614130511Z" level=info msg="Image localhost/kicbase/echo-server:functional-364120 not found" id=97c430cb-f0f1-4d6a-8416-aece39715f23 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:38 functional-364120 crio[9872]: time="2025-12-16T06:53:38.614165834Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-364120 found" id=97c430cb-f0f1-4d6a-8416-aece39715f23 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.573546291Z" level=info msg="Checking image status: kicbase/echo-server:functional-364120" id=6162b9d6-30e0-4e38-a19b-183c207b9c6d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.573724811Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.573783035Z" level=info msg="Image kicbase/echo-server:functional-364120 not found" id=6162b9d6-30e0-4e38-a19b-183c207b9c6d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.573862273Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-364120 found" id=6162b9d6-30e0-4e38-a19b-183c207b9c6d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.599347027Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-364120" id=aa496da6-1320-4f49-b641-180c3838085b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.599506757Z" level=info msg="Image docker.io/kicbase/echo-server:functional-364120 not found" id=aa496da6-1320-4f49-b641-180c3838085b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.599562265Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-364120 found" id=aa496da6-1320-4f49-b641-180c3838085b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:53:41 functional-364120 crio[9872]: time="2025-12-16T06:53:41.622595742Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-364120" id=571e6193-f5d6-49c3-9ec5-71e49bc0d329 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:55:08.322033   25350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:55:08.322445   25350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:55:08.323924   25350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:55:08.324270   25350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:55:08.325812   25350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:55:08 up  9:37,  0 user,  load average: 0.23, 0.26, 0.41
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:55:05 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:55:06 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 16 06:55:06 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:55:06 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:55:06 functional-364120 kubelet[25223]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:55:06 functional-364120 kubelet[25223]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:55:06 functional-364120 kubelet[25223]: E1216 06:55:06.442930   25223 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:55:06 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:55:06 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:55:07 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1294.
	Dec 16 06:55:07 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:55:07 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:55:07 functional-364120 kubelet[25228]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:55:07 functional-364120 kubelet[25228]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:55:07 functional-364120 kubelet[25228]: E1216 06:55:07.209630   25228 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:55:07 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:55:07 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:55:07 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1295.
	Dec 16 06:55:07 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:55:07 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:55:07 functional-364120 kubelet[25265]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:55:07 functional-364120 kubelet[25265]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:55:07 functional-364120 kubelet[25265]: E1216 06:55:07.961404   25265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:55:07 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:55:07 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (338.196927ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.66s)
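Note: the repeated kubelet failures in the log above (restart counters 1293-1295) are the root cause of the dead apiserver: every start attempt aborts during configuration validation with "kubelet is configured to not run on a host using cgroup v1". Below is a minimal, hypothetical Go sketch (not part of the minikube test suite) that performs the same host check by looking at the filesystem type of /sys/fs/cgroup; it assumes a Linux host and uses the standard cgroup2 filesystem magic number.

	// cgroupcheck.go - hypothetical sketch, not code from this report.
	// Reports whether /sys/fs/cgroup is mounted as cgroup2 (unified) or legacy v1,
	// i.e. the condition the kubelet configuration validation complains about above.
	package main

	import (
		"fmt"
		"syscall"
	)

	// Filesystem magic number for cgroup2fs ("cgrp" in ASCII).
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs /sys/fs/cgroup failed:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 - the kubelet configuration in the log above refuses to start here")
		}
	}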

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-364120 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-364120 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (67.765312ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-364120 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
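Note: the template failure above is a downstream symptom rather than the test's real problem: with the apiserver on 192.168.49.2:8441 refusing connections, kubectl returns an empty List, and (index .items 0) on an empty slice aborts template execution. The following standalone Go sketch (hypothetical, not test code from this report) reproduces that failure mode with the exact raw data and template shown in the log; the precise error wording can differ between Go versions.

	// templatefail.go - hypothetical sketch reproducing the NodeLabels template error.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"text/template"
	)

	func main() {
		// Raw data handed to the template engine, copied from the log above.
		raw := `{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}`
		var obj map[string]interface{}
		if err := json.Unmarshal([]byte(raw), &obj); err != nil {
			panic(err)
		}

		// The same template the test passes via kubectl --output=go-template.
		tmpl := template.Must(template.New("output").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))

		// Fails with an index-out-of-range error because .items is empty.
		if err := tmpl.Execute(os.Stdout, obj); err != nil {
			fmt.Println("template error:", err)
		}
	}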
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-364120
helpers_test.go:244: (dbg) docker inspect functional-364120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	        "Created": "2025-12-16T06:24:05.281524036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1628059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:24:05.346294886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/hosts",
	        "LogPath": "/var/lib/docker/containers/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf/8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf-json.log",
	        "Name": "/functional-364120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-364120:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-364120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e0dcfb5d0158b962b0d945494e0a3636f2da9d368e4019f2a9b936e350e1ddf",
	                "LowerDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12074d5315598eb4603dee3f15e2733877a36602bed3464b5f81d77464900752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-364120",
	                "Source": "/var/lib/docker/volumes/functional-364120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-364120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-364120",
	                "name.minikube.sigs.k8s.io": "functional-364120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca8e444af5ea4dc220aae407b23205e89ee2c7bfaf0d7da28c0fa8a6e9438a0b",
	            "SandboxKey": "/var/run/docker/netns/ca8e444af5ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-364120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:28:ec:c3:f0:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6847428577f52c75d7f6ab7a92b3395c1204da1608971d5af98d3898a2210da",
	                    "EndpointID": "e579fd8a0ba117da836073d37b7f617933568bedfc3fb52e056b4772aaddecbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-364120",
	                        "8e0dcfb5d015"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
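Note: the inspect output above also shows how the host reaches the cluster: container port 8441/tcp (the apiserver) is published on 127.0.0.1 with an ephemeral host port (34263 in this run). A hypothetical Go helper along the following lines (not part of minikube; the container name and port are taken from this report) can pull that mapping out of docker inspect JSON.

	// hostport.go - hypothetical helper, not code from this report.
	// Prints the host address(es) publishing the apiserver port 8441/tcp.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-364120").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
			os.Exit(1)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			fmt.Fprintln(os.Stderr, "unexpected inspect output")
			os.Exit(1)
		}
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:34263 in the log above
		}
	}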
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-364120 -n functional-364120: exit status 2 (315.2389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-364120 service hello-node --url                                                                                                          │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001:/mount-9p --alsologtostderr -v=1              │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh -- ls -la /mount-9p                                                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh cat /mount-9p/test-1765868003827322411                                                                                        │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh sudo umount -f /mount-9p                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3205506699/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh -- ls -la /mount-9p                                                                                                           │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh sudo umount -f /mount-9p                                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount1                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount1 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount2 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ mount     │ -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount3 --alsologtostderr -v=1                │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ ssh       │ functional-364120 ssh findmnt -T /mount1                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh findmnt -T /mount2                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ ssh       │ functional-364120 ssh findmnt -T /mount3                                                                                                            │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │ 16 Dec 25 06:53 UTC │
	│ mount     │ -p functional-364120 --kill=true                                                                                                                    │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ start     │ -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ start     │ -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ start     │ -p functional-364120 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-364120 --alsologtostderr -v=1                                                                                      │ functional-364120 │ jenkins │ v1.37.0 │ 16 Dec 25 06:53 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:53:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:53:33.621593 1657034 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:53:33.621738 1657034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.621751 1657034 out.go:374] Setting ErrFile to fd 2...
	I1216 06:53:33.621757 1657034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.622008 1657034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:53:33.622392 1657034 out.go:368] Setting JSON to false
	I1216 06:53:33.623268 1657034 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":34565,"bootTime":1765833449,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:53:33.623334 1657034 start.go:143] virtualization:  
	I1216 06:53:33.626568 1657034 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:53:33.630377 1657034 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:53:33.630472 1657034 notify.go:221] Checking for updates...
	I1216 06:53:33.636109 1657034 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:53:33.639064 1657034 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:53:33.641980 1657034 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:53:33.644866 1657034 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:53:33.647809 1657034 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:53:33.651292 1657034 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:53:33.651909 1657034 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:53:33.675417 1657034 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:53:33.675545 1657034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.738351 1657034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.728990657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.738463 1657034 docker.go:319] overlay module found
	I1216 06:53:33.741436 1657034 out.go:179] * Using the docker driver based on existing profile
	I1216 06:53:33.744303 1657034 start.go:309] selected driver: docker
	I1216 06:53:33.744333 1657034 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.744446 1657034 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:53:33.744631 1657034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.805595 1657034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.791477969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.806060 1657034 cni.go:84] Creating CNI manager for ""
	I1216 06:53:33.806123 1657034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:53:33.806164 1657034 start.go:353] cluster config:
	{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.809236 1657034 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553471769Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553507896Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553554657Z" level=info msg="Create NRI interface"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553657485Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553665107Z" level=info msg="runtime interface created"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553674699Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553680746Z" level=info msg="runtime interface starting up..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553686137Z" level=info msg="starting plugins..."
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553698814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 06:38:48 functional-364120 crio[9872]: time="2025-12-16T06:38:48.553771561Z" level=info msg="No systemd watchdog enabled"
	Dec 16 06:38:48 functional-364120 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.046654305Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=2afa36a7-e595-4e9e-9866-100014f74db0 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.047561496Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bfee085e-d788-43aa-852e-e818968557f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048165668Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8209edd3-2ad3-4cea-9d15-760a1b94c10d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.048839782Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f38b3b25-171e-488b-9dbb-3a4615d07ce7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049385123Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=674d3a91-05c7-4375-a638-2bb51d77e82a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.049934157Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d7315967-45e5-4ab2-b579-15a88e3c5cf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:42:54 functional-364120 crio[9872]: time="2025-12-16T06:42:54.050441213Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2d27746-f739-4711-a521-d245b78e775c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.981581513Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=cc27c34f-1129-41fd-83b5-8698b0697603 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982462832Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f632e983-ad57-48b2-98c3-8802e4b6bb91 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.982972654Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4d99c7a4-52a2-4a4f-9569-9d8a29ee230d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983463866Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=824a4ba3-63ed-49ce-a194-3bf34f462483 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.983972891Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=52cc52f0-f1ca-4fc4-a91a-13dd8c19e754 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984501125Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=baf81d2d-269c-44fd-a82c-811876adf596 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 06:46:55 functional-364120 crio[9872]: time="2025-12-16T06:46:55.984974015Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=88fe0e4e-4ea7-4b38-a635-f3138f370377 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:53:36.523214   23757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:36.523659   23757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:36.525570   23757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:36.525933   23757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 06:53:36.527459   23757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:53:36 up  9:36,  0 user,  load average: 0.55, 0.25, 0.43
	Linux functional-364120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1171.
	Dec 16 06:53:34 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:34 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:34 functional-364120 kubelet[23587]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:34 functional-364120 kubelet[23587]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:34 functional-364120 kubelet[23587]: E1216 06:53:34.971940   23587 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:34 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:35 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1172.
	Dec 16 06:53:35 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:35 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:35 functional-364120 kubelet[23654]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:35 functional-364120 kubelet[23654]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:35 functional-364120 kubelet[23654]: E1216 06:53:35.710278   23654 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:35 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:35 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:53:36 functional-364120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1173.
	Dec 16 06:53:36 functional-364120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:36 functional-364120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:53:36 functional-364120 kubelet[23740]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:36 functional-364120 kubelet[23740]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 06:53:36 functional-364120 kubelet[23740]: E1216 06:53:36.451372   23740 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:53:36 functional-364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:53:36 functional-364120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120 -n functional-364120: exit status 2 (302.095343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-364120" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.40s)
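The kubelet journal above points at the root cause of the stopped apiserver: kubelet exits during config validation because it is configured not to run on a cgroup v1 host. A quick way to confirm which cgroup version the runner is actually using (a diagnostic sketch; neither command is part of the recorded run) is:

	stat -fc %T /sys/fs/cgroup/                 # prints "cgroup2fs" on cgroup v2, "tmpfs" on cgroup v1
	docker info --format '{{.CgroupVersion}}'   # the version the container runtime reports, "1" or "2"

Ubuntu 20.04 with the 5.15 AWS kernel, as shown in the host info above, typically boots with cgroup v1 by default, which is consistent with the validation error.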

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1216 06:51:06.215180 1652583 out.go:360] Setting OutFile to fd 1 ...
I1216 06:51:06.215415 1652583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:51:06.215445 1652583 out.go:374] Setting ErrFile to fd 2...
I1216 06:51:06.215463 1652583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:51:06.215765 1652583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:51:06.216165 1652583 mustload.go:66] Loading cluster: functional-364120
I1216 06:51:06.216668 1652583 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:51:06.217204 1652583 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:51:06.247735 1652583 host.go:66] Checking if "functional-364120" exists ...
I1216 06:51:06.248048 1652583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 06:51:06.358333 1652583 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:51:06.339380029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 06:51:06.358451 1652583 api_server.go:166] Checking apiserver status ...
I1216 06:51:06.358512 1652583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 06:51:06.358552 1652583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:51:06.420754 1652583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
W1216 06:51:06.544003 1652583 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1216 06:51:06.547383 1652583 out.go:179] * The control-plane node functional-364120 apiserver is not running: (state=Stopped)
I1216 06:51:06.550343 1652583 out.go:179]   To start a cluster, run: "minikube start -p functional-364120"

                                                
                                                
stdout: * The control-plane node functional-364120 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-364120"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1652582: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)
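The tunnel exits with code 103 because its apiserver pre-flight check fails; the ssh_runner line in the stderr above shows the exact probe it runs inside the node. The same probe can be repeated by hand with the binary and profile from this run (a sketch; both commands are expected to report a stopped apiserver here, as the test helpers already did):

	out/minikube-linux-arm64 -p functional-364120 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"   # non-zero exit: no apiserver process in the node
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-364120                     # prints "Stopped"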

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-364120 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-364120 apply -f testdata/testsvc.yaml: exit status 1 (59.436514ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-364120 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (129.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.100.172.184": Temporary Error: Get "http://10.100.172.184": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-364120 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-364120 get svc nginx-svc: exit status 1 (60.144097ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-364120 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (129.74s)
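The direct-access step repeatedly issues HTTP GETs against the nginx-svc ClusterIP from the host until it gives up (129.74s here). The same check can be made by hand with the IP and context taken from the failure above (a sketch; in this state the curl times out and kubectl reports connection refused, matching the output):

	curl --max-time 10 http://10.100.172.184                # the ClusterIP the test tried to reach
	kubectl --context functional-364120 get svc nginx-svc   # confirms whether the service object is reachable at all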

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-364120 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-364120 create deployment hello-node --image kicbase/echo-server: exit status 1 (55.18765ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-364120 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 service list: exit status 103 (280.863821ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-364120 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-364120"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-364120 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-364120 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-364120\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 service list -o json: exit status 103 (256.059ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-364120 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-364120"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-364120 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 service --namespace=default --https --url hello-node: exit status 103 (254.676252ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-364120 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-364120"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-364120 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 service hello-node --url --format={{.IP}}: exit status 103 (271.392007ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-364120 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-364120"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-364120 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-364120 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-364120\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 service hello-node --url: exit status 103 (244.674743ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-364120 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-364120"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-364120 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-364120 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-364120"
functional_test.go:1579: failed to parse "* The control-plane node functional-364120 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-364120\"": parse "* The control-plane node functional-364120 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-364120\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765868003827322411" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765868003827322411" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765868003827322411" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001/test-1765868003827322411
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.140803ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 06:53:24.180740 1599255 retry.go:31] will retry after 607.962611ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 06:53 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 06:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 06:53 test-1765868003827322411
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh cat /mount-9p/test-1765868003827322411
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-364120 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-364120 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (62.580621ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-364120 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (270.653278ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=38167)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 16 06:53 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 16 06:53 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 16 06:53 test-1765868003827322411
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-364120 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:38167
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001:/mount-9p --alsologtostderr -v=1] stderr:
I1216 06:53:23.901627 1655096 out.go:360] Setting OutFile to fd 1 ...
I1216 06:53:23.901856 1655096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:23.901876 1655096 out.go:374] Setting ErrFile to fd 2...
I1216 06:53:23.901892 1655096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:23.902167 1655096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:53:23.902456 1655096 mustload.go:66] Loading cluster: functional-364120
I1216 06:53:23.902855 1655096 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:23.903452 1655096 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:53:23.920914 1655096 host.go:66] Checking if "functional-364120" exists ...
I1216 06:53:23.921234 1655096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 06:53:24.019414 1655096 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-16 06:53:24.008019525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 06:53:24.019581 1655096 cli_runner.go:164] Run: docker network inspect functional-364120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 06:53:24.050183 1655096 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001 into VM as /mount-9p ...
I1216 06:53:24.053482 1655096 out.go:179]   - Mount type:   9p
I1216 06:53:24.056463 1655096 out.go:179]   - User ID:      docker
I1216 06:53:24.059550 1655096 out.go:179]   - Group ID:     docker
I1216 06:53:24.062598 1655096 out.go:179]   - Version:      9p2000.L
I1216 06:53:24.065557 1655096 out.go:179]   - Message Size: 262144
I1216 06:53:24.072377 1655096 out.go:179]   - Options:      map[]
I1216 06:53:24.075377 1655096 out.go:179]   - Bind Address: 192.168.49.1:38167
I1216 06:53:24.078360 1655096 out.go:179] * Userspace file server: 
I1216 06:53:24.078695 1655096 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1216 06:53:24.078791 1655096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:53:24.100011 1655096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
I1216 06:53:24.195338 1655096 mount.go:180] unmount for /mount-9p ran successfully
I1216 06:53:24.195369 1655096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1216 06:53:24.203816 1655096 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=38167,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1216 06:53:24.214831 1655096 main.go:127] stdlog: ufs.go:141 connected
I1216 06:53:24.214999 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tversion tag 65535 msize 262144 version '9P2000.L'
I1216 06:53:24.215042 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rversion tag 65535 msize 262144 version '9P2000'
I1216 06:53:24.215299 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1216 06:53:24.215356 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rattach tag 0 aqid (4431f 25ef61ee 'd')
I1216 06:53:24.216582 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 0
I1216 06:53:24.216663 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431f 25ef61ee 'd') m d775 at 0 mt 1765868003 l 4096 t 0 d 0 ext )
I1216 06:53:24.224930 1655096 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/.mount-process: {Name:mke844fbeb5523a1c871f2730c8f210e361eb3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 06:53:24.225146 1655096 mount.go:105] mount successful: ""
I1216 06:53:24.228647 1655096 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2352872432/001 to /mount-9p
I1216 06:53:24.231467 1655096 out.go:203] 
I1216 06:53:24.234230 1655096 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1216 06:53:25.333735 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 0
I1216 06:53:25.333812 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431f 25ef61ee 'd') m d775 at 0 mt 1765868003 l 4096 t 0 d 0 ext )
I1216 06:53:25.334202 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 1 
I1216 06:53:25.334237 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 
I1216 06:53:25.334355 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Topen tag 0 fid 1 mode 0
I1216 06:53:25.334401 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Ropen tag 0 qid (4431f 25ef61ee 'd') iounit 0
I1216 06:53:25.334531 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 0
I1216 06:53:25.334568 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431f 25ef61ee 'd') m d775 at 0 mt 1765868003 l 4096 t 0 d 0 ext )
I1216 06:53:25.334722 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 0 count 262120
I1216 06:53:25.334841 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 258
I1216 06:53:25.334975 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 258 count 261862
I1216 06:53:25.335003 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.335118 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 258 count 262120
I1216 06:53:25.335143 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.335278 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1216 06:53:25.335312 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 (44322 25ef61ee '') 
I1216 06:53:25.335436 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.335473 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44322 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.335597 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.335630 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44322 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.335765 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 2
I1216 06:53:25.335790 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.335915 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 2 0:'test-1765868003827322411' 
I1216 06:53:25.335949 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 (4433e 25ef61ee '') 
I1216 06:53:25.336072 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.336115 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('test-1765868003827322411' 'jenkins' 'jenkins' '' q (4433e 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.336269 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.336305 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('test-1765868003827322411' 'jenkins' 'jenkins' '' q (4433e 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.336441 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 2
I1216 06:53:25.336482 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.336633 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1216 06:53:25.336677 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 (44324 25ef61ee '') 
I1216 06:53:25.336805 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.336838 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44324 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.336972 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.337024 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44324 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.337137 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 2
I1216 06:53:25.337159 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.337273 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 258 count 262120
I1216 06:53:25.337309 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.337443 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 1
I1216 06:53:25.337472 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.596863 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 1 0:'test-1765868003827322411' 
I1216 06:53:25.596937 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 (4433e 25ef61ee '') 
I1216 06:53:25.597118 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 1
I1216 06:53:25.597166 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('test-1765868003827322411' 'jenkins' 'jenkins' '' q (4433e 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.597312 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 1 newfid 2 
I1216 06:53:25.597346 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 
I1216 06:53:25.597474 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Topen tag 0 fid 2 mode 0
I1216 06:53:25.597525 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Ropen tag 0 qid (4433e 25ef61ee '') iounit 0
I1216 06:53:25.597663 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 1
I1216 06:53:25.597701 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('test-1765868003827322411' 'jenkins' 'jenkins' '' q (4433e 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.597852 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 2 offset 0 count 262120
I1216 06:53:25.597897 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 24
I1216 06:53:25.598012 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 2 offset 24 count 262120
I1216 06:53:25.598046 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.598208 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 2 offset 24 count 262120
I1216 06:53:25.598258 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.598483 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 2
I1216 06:53:25.598532 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.598694 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 1
I1216 06:53:25.598727 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.933587 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 0
I1216 06:53:25.933677 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431f 25ef61ee 'd') m d775 at 0 mt 1765868003 l 4096 t 0 d 0 ext )
I1216 06:53:25.934028 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 1 
I1216 06:53:25.934103 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 
I1216 06:53:25.936656 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Topen tag 0 fid 1 mode 0
I1216 06:53:25.936754 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Ropen tag 0 qid (4431f 25ef61ee 'd') iounit 0
I1216 06:53:25.936922 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 0
I1216 06:53:25.936992 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431f 25ef61ee 'd') m d775 at 0 mt 1765868003 l 4096 t 0 d 0 ext )
I1216 06:53:25.937173 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 0 count 262120
I1216 06:53:25.937306 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 258
I1216 06:53:25.937472 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 258 count 261862
I1216 06:53:25.937508 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.937638 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 258 count 262120
I1216 06:53:25.937688 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.937840 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1216 06:53:25.937877 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 (44322 25ef61ee '') 
I1216 06:53:25.937987 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.938033 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44322 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.938150 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.938183 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44322 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.938301 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 2
I1216 06:53:25.938325 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.938460 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 2 0:'test-1765868003827322411' 
I1216 06:53:25.938502 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 (4433e 25ef61ee '') 
I1216 06:53:25.938598 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.938631 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('test-1765868003827322411' 'jenkins' 'jenkins' '' q (4433e 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.938760 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.938801 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('test-1765868003827322411' 'jenkins' 'jenkins' '' q (4433e 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.938919 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 2
I1216 06:53:25.938941 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.939080 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1216 06:53:25.939112 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rwalk tag 0 (44324 25ef61ee '') 
I1216 06:53:25.939227 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.939261 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44324 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.939397 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tstat tag 0 fid 2
I1216 06:53:25.939432 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44324 25ef61ee '') m 644 at 0 mt 1765868003 l 24 t 0 d 0 ext )
I1216 06:53:25.939529 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 2
I1216 06:53:25.939560 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.939682 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tread tag 0 fid 1 offset 258 count 262120
I1216 06:53:25.939724 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rread tag 0 count 0
I1216 06:53:25.939869 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 1
I1216 06:53:25.939903 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:25.941204 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1216 06:53:25.941274 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rerror tag 0 ename 'file not found' ecode 0
I1216 06:53:26.209700 1655096 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55504 Tclunk tag 0 fid 0
I1216 06:53:26.209749 1655096 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55504 Rclunk tag 0
I1216 06:53:26.210927 1655096 main.go:127] stdlog: ufs.go:147 disconnected
I1216 06:53:26.233089 1655096 out.go:179] * Unmounting /mount-9p ...
I1216 06:53:26.236110 1655096 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1216 06:53:26.243235 1655096 mount.go:180] unmount for /mount-9p ran successfully
I1216 06:53:26.243353 1655096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/.mount-process: {Name:mke844fbeb5523a1c871f2730c8f210e361eb3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 06:53:26.246488 1655096 out.go:203] 
W1216 06:53:26.249426 1655096 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1216 06:53:26.252450 1655096 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.50s)
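For context on the sequence above: the Tversion/Tattach/Twalk/Topen/Tread/Tclunk pairs are the 9P2000 requests exchanged between the kernel 9p client inside the minikube node and the userspace file server that "minikube mount" runs on the host, before the mount process is terminated (the MK_INTERRUPTED exit above). The mount itself can be reproduced by hand from the commands the log already shows; the sketch below is assembled from those commands and lightly cleaned up. The gateway address 192.168.49.1, port 38167 and mount point /mount-9p are the values from this particular run, so treat them as placeholders for another environment.

    # inside the minikube node (e.g. via "minikube ssh"):
    # 1. remove any stale mount at the target path (same guard the test uses)
    [ -n "$(findmnt -T /mount-9p | grep /mount-9p)" ] && sudo umount -f -l /mount-9p || true
    # 2. create the mount point
    sudo mkdir -p /mount-9p
    # 3. mount the host-side 9p server into the node (values taken from this run)
    sudo mount -t 9p \
      -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=38167,trans=tcp,version=9p2000.L \
      192.168.49.1 /mount-9p
    # 4. when finished, unmount (what "minikube mount" does on interrupt)
    sudo umount -f -l /mount-9p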

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (391.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 07:06:06.670916 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:06:06.817719 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:08:08.325579 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-614518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m27.79374408s)

                                                
                                                
-- stdout --
	* [ha-614518] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-614518" primary control-plane node in "ha-614518" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-614518-m02" control-plane node in "ha-614518" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-614518-m04" worker node in "ha-614518" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 07:03:44.880217 1687487 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:03:44.880366 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880378 1687487 out.go:374] Setting ErrFile to fd 2...
	I1216 07:03:44.880384 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880665 1687487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:03:44.881079 1687487 out.go:368] Setting JSON to false
	I1216 07:03:44.882032 1687487 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":35176,"bootTime":1765833449,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:03:44.882105 1687487 start.go:143] virtualization:  
	I1216 07:03:44.885307 1687487 out.go:179] * [ha-614518] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:03:44.889019 1687487 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:03:44.889105 1687487 notify.go:221] Checking for updates...
	I1216 07:03:44.894878 1687487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:03:44.897985 1687487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:44.900761 1687487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:03:44.903578 1687487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:03:44.906467 1687487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:03:44.909985 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:44.910567 1687487 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:03:44.945233 1687487 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:03:44.945374 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.031657 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.011244188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.031829 1687487 docker.go:319] overlay module found
	I1216 07:03:45.037435 1687487 out.go:179] * Using the docker driver based on existing profile
	I1216 07:03:45.040996 1687487 start.go:309] selected driver: docker
	I1216 07:03:45.041023 1687487 start.go:927] validating driver "docker" against &{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.041175 1687487 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:03:45.041288 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.134661 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.119026433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.135091 1687487 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:03:45.135120 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:45.135176 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:45.135234 1687487 start.go:353] cluster config:
	{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.149972 1687487 out.go:179] * Starting "ha-614518" primary control-plane node in "ha-614518" cluster
	I1216 07:03:45.153136 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:45.159266 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:45.170928 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:45.170953 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:45.171004 1687487 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 07:03:45.171018 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:45.171117 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:45.171128 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:45.171285 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.215544 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:45.215626 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:45.215662 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:45.215843 1687487 start.go:360] acquireMachinesLock for ha-614518: {Name:mk3b1063af1f3d64814d71b86469148e674fab2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:45.216121 1687487 start.go:364] duration metric: took 138.127µs to acquireMachinesLock for "ha-614518"
	I1216 07:03:45.216289 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:45.216367 1687487 fix.go:54] fixHost starting: 
	I1216 07:03:45.217861 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.257760 1687487 fix.go:112] recreateIfNeeded on ha-614518: state=Stopped err=<nil>
	W1216 07:03:45.257825 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:45.263736 1687487 out.go:252] * Restarting existing docker container for "ha-614518" ...
	I1216 07:03:45.263878 1687487 cli_runner.go:164] Run: docker start ha-614518
	I1216 07:03:45.543794 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.563314 1687487 kic.go:430] container "ha-614518" state is running.
	I1216 07:03:45.563689 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:45.584894 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.585139 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:45.585210 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:45.605415 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:45.606022 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:45.606037 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:45.607343 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36692->127.0.0.1:34310: read: connection reset by peer
	I1216 07:03:48.740166 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.740200 1687487 ubuntu.go:182] provisioning hostname "ha-614518"
	I1216 07:03:48.740337 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.763945 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.764266 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.764282 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518 && echo "ha-614518" | sudo tee /etc/hostname
	I1216 07:03:48.905449 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.905536 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.922159 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.922475 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.922498 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:49.056835 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:03:49.056862 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:49.056897 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:49.056913 1687487 provision.go:84] configureAuth start
	I1216 07:03:49.056990 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:49.074475 1687487 provision.go:143] copyHostCerts
	I1216 07:03:49.074521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074564 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:49.074584 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074664 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:49.074753 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074776 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:49.074785 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074812 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:49.074873 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074892 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:49.074902 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074929 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:49.074985 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518 san=[127.0.0.1 192.168.49.2 ha-614518 localhost minikube]
	I1216 07:03:49.677070 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:49.677146 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:49.677189 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.696012 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:49.796234 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:49.796294 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:49.813987 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:49.814051 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1216 07:03:49.832994 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:49.833117 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:49.852358 1687487 provision.go:87] duration metric: took 795.417685ms to configureAuth
	I1216 07:03:49.852395 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:49.852668 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:49.852778 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.870814 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:49.871144 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:49.871168 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:50.263536 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:50.263563 1687487 machine.go:97] duration metric: took 4.678406656s to provisionDockerMachine
	I1216 07:03:50.263587 1687487 start.go:293] postStartSetup for "ha-614518" (driver="docker")
	I1216 07:03:50.263599 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:50.263688 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:50.263741 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.288161 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.388424 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:50.391627 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:50.391661 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:50.391673 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:50.391729 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:50.391823 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:50.391835 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:50.391942 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:50.399136 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:50.417106 1687487 start.go:296] duration metric: took 153.503323ms for postStartSetup
	I1216 07:03:50.417188 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:50.417231 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.433965 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.525944 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:03:50.531286 1687487 fix.go:56] duration metric: took 5.314914646s for fixHost
	I1216 07:03:50.531388 1687487 start.go:83] releasing machines lock for "ha-614518", held for 5.315142989s
	I1216 07:03:50.531501 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:50.548584 1687487 ssh_runner.go:195] Run: cat /version.json
	I1216 07:03:50.548651 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.548722 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:03:50.548786 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.573896 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.582211 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.773920 1687487 ssh_runner.go:195] Run: systemctl --version
	I1216 07:03:50.780399 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:03:50.815666 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:03:50.820120 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:03:50.820193 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:03:50.828039 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:03:50.828121 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:03:50.828169 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:03:50.828249 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:03:50.844121 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:03:50.857243 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:03:50.857381 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:03:50.873095 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:03:50.886187 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:03:51.006275 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:03:51.140914 1687487 docker.go:234] disabling docker service ...
	I1216 07:03:51.140991 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:03:51.157238 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:03:51.171898 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:03:51.287675 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:03:51.421310 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:03:51.434905 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:03:51.449226 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:03:51.449297 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.458120 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:03:51.458190 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.467336 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.476031 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.484943 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:03:51.493309 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.502592 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.511462 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.520904 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:03:51.528691 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:03:51.536073 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:51.644582 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:03:51.813587 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:03:51.813682 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:03:51.818257 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:03:51.818378 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:03:51.822136 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:03:51.848811 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:03:51.848971 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:03:51.877270 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:03:51.911920 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:03:51.914805 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:03:51.931261 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:03:51.935082 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:03:51.945205 1687487 kubeadm.go:884] updating cluster {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:03:51.945357 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:51.945422 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:51.979077 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:51.979106 1687487 crio.go:433] Images already preloaded, skipping extraction
	I1216 07:03:51.979163 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:52.008543 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:52.008569 1687487 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:03:52.008578 1687487 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 07:03:52.008687 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
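
The drop-in above overrides kubelet's ExecStart with per-node flags (the v1.34.2 binary path, --hostname-override and --node-ip). A hedged sketch of rendering such a unit with text/template; the template fields and structure are assumptions for illustration, not minikube's real templating:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the 10-kubeadm.conf drop-in
// shown in the log; the placeholder names are assumptions for this sketch.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	node := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.2", "ha-614518", "192.168.49.2"}
	// Render to stdout; per the log, the rendered unit is scp'd to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes).
	_ = tmpl.Execute(os.Stdout, node)
}
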
	I1216 07:03:52.008783 1687487 ssh_runner.go:195] Run: crio config
	I1216 07:03:52.064647 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:52.064671 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:52.064694 1687487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:03:52.064717 1687487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-614518 NodeName:ha-614518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:03:52.064852 1687487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-614518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
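
The kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new concatenates four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small stdlib-only sanity check, written for illustration rather than taken from minikube, that counts how many documents carry each expected kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the log; run on the node after the file is written.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := []string{"InitConfiguration", "ClusterConfiguration", "KubeletConfiguration", "KubeProxyConfiguration"}
	docs := strings.Split(string(data), "\n---\n")
	for _, kind := range want {
		n := 0
		for _, doc := range docs {
			if strings.Contains(doc, "kind: "+kind) {
				n++
			}
		}
		fmt.Printf("%-24s %d document(s)\n", kind, n)
	}
}
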
	
	I1216 07:03:52.064876 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:03:52.064936 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:03:52.077257 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:03:52.077367 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
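
Because the ip_vs modules were not loadable (the lsmod probe above exited with status 1), kube-vip skips IPVS control-plane load-balancing and serves the HA VIP 192.168.49.254 via ARP with leader election. The manifest's timings (vip_leaseduration 5, vip_renewdeadline 3, vip_retryperiod 1, in seconds) follow the usual ordering leaseduration > renewdeadline > retryperiod > 0. An illustrative sketch of that invariant check, not taken from kube-vip or minikube:

package main

import "fmt"

// leaderElection mirrors the vip_leaseduration / vip_renewdeadline /
// vip_retryperiod env vars from the manifest above (values in seconds).
type leaderElection struct {
	LeaseDuration, RenewDeadline, RetryPeriod int
}

// valid applies the common leader-election ordering constraint.
func (l leaderElection) valid() bool {
	return l.LeaseDuration > l.RenewDeadline && l.RenewDeadline > l.RetryPeriod && l.RetryPeriod > 0
}

func main() {
	cfg := leaderElection{LeaseDuration: 5, RenewDeadline: 3, RetryPeriod: 1}
	fmt.Println("kube-vip leader election timings valid:", cfg.valid())
}
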
	I1216 07:03:52.077440 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:03:52.085615 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:03:52.085717 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1216 07:03:52.093632 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1216 07:03:52.107221 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:03:52.120189 1687487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1216 07:03:52.132971 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:03:52.145766 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:03:52.149312 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
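
The bash one-liner above pins control-plane.minikube.internal to the HA VIP by stripping any existing entry from /etc/hosts and appending a fresh "IP<TAB>host" line. A stdlib-only Go sketch of the same rewrite; the function name is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any line ending in "<TAB>host" from the hosts-file content
// and appends "ip<TAB>host", mirroring the grep -v / echo pipeline above.
func pinHost(content, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending so the file stays tidy.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	return out + "\n" + ip + "\t" + host + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(pinHost(string(data), "192.168.49.254", "control-plane.minikube.internal"))
}
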
	I1216 07:03:52.158923 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:52.283710 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:03:52.301582 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.2
	I1216 07:03:52.301603 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:03:52.301620 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.301773 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:03:52.301822 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:03:52.301833 1687487 certs.go:257] generating profile certs ...
	I1216 07:03:52.301907 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:03:52.301945 1687487 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1
	I1216 07:03:52.301963 1687487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1216 07:03:52.415504 1687487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 ...
	I1216 07:03:52.415537 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1: {Name:mk670a19d587f16baf0df889e9e917056f8f5261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415731 1687487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 ...
	I1216 07:03:52.415747 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1: {Name:mk54bea57dae6ed1500bec8bfd5028c4fbd13a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415839 1687487 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt
	I1216 07:03:52.415977 1687487 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key
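
The regenerated apiserver certificate must carry every control-plane address as a subject alternative name, including the HA VIP 192.168.49.254 and the second control plane's 192.168.49.3 listed above. A hedged sketch that checks a PEM certificate for a given IP SAN with crypto/x509; the path comes from the log, the helper is not minikube code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// hasIPSAN reports whether the certificate at path lists ip among its
// IP subject alternative names.
func hasIPSAN(path, ip string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	want := net.ParseIP(ip)
	for _, san := range cert.IPAddresses {
		if san.Equal(want) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasIPSAN("/var/lib/minikube/certs/apiserver.crt", "192.168.49.254")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver cert covers HA VIP:", ok)
}
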
	I1216 07:03:52.416116 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:03:52.416135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:03:52.416152 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:03:52.416168 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:03:52.416186 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:03:52.416197 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:03:52.416215 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:03:52.416235 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:03:52.416253 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:03:52.416304 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:03:52.416340 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:03:52.416355 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:03:52.416384 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:03:52.416413 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:03:52.416440 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:03:52.416515 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:52.416550 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.416569 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.416583 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.417145 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:03:52.438246 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:03:52.458550 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:03:52.483806 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:03:52.504536 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:03:52.531165 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:03:52.551893 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:03:52.571589 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:03:52.590649 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:03:52.610138 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:03:52.630965 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:03:52.650790 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:03:52.664186 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:03:52.671337 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.678844 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:03:52.686401 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690368 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690436 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.731470 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:03:52.738706 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.745967 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:03:52.753284 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757015 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757119 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.798254 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:03:52.805456 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.812464 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:03:52.820202 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823851 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823958 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.864891 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:03:52.872666 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:03:52.876565 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:03:52.917593 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:03:52.962371 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:03:53.011634 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:03:53.070012 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:03:53.127584 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 07:03:53.215856 1687487 kubeadm.go:401] StartCluster: {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:53.216035 1687487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:03:53.216134 1687487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:03:53.263680 1687487 cri.go:89] found id: "11e4b44d62d5436a07f6d8edd733f4092c09af04d3fa6130a9ee2d504c2d7b92"
	I1216 07:03:53.263744 1687487 cri.go:89] found id: "69514719ce90eebffbe68b0ace74e14259ceea7c07980c6918b6af6e8b91ba10"
	I1216 07:03:53.263764 1687487 cri.go:89] found id: "b6e4d702970e634028ab9da9ca8e258d02bb0aa908a74a428d72bd35cdec320d"
	I1216 07:03:53.263787 1687487 cri.go:89] found id: "c0e9d15ebb1cd884461c491d76b9c135253b28403f1a18a97c1bdb68443fe858"
	I1216 07:03:53.263822 1687487 cri.go:89] found id: "db591d0d437f81b8c65552b6efbd2ca8fb29bb1e0989d62b2cce8be69b46105c"
	I1216 07:03:53.263846 1687487 cri.go:89] found id: ""
	I1216 07:03:53.263924 1687487 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 07:03:53.279629 1687487 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:03:53Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:03:53.279752 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:03:53.291564 1687487 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 07:03:53.291626 1687487 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 07:03:53.291717 1687487 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 07:03:53.306008 1687487 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:03:53.306492 1687487 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-614518" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.306648 1687487 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "ha-614518" cluster setting kubeconfig missing "ha-614518" context setting]
	I1216 07:03:53.306941 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.307502 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
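
The rest.Config above points client-go at https://192.168.49.2:8443 using the profile's client certificate and key plus the cluster CA. A hedged sketch of building an equivalent clientset; it assumes a Go module with k8s.io/client-go available and is not minikube's own wiring:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and certificate paths are taken from the rest.Config in the log.
	home := "/home/jenkins/minikube-integration/22141-1596013/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: home + "/profiles/ha-614518/client.crt",
			KeyFile:  home + "/profiles/ha-614518/client.key",
			CAFile:   home + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
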
	I1216 07:03:53.308322 1687487 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 07:03:53.308427 1687487 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 07:03:53.308488 1687487 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 07:03:53.308515 1687487 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 07:03:53.308406 1687487 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 07:03:53.308623 1687487 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 07:03:53.308936 1687487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 07:03:53.317737 1687487 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 07:03:53.317797 1687487 kubeadm.go:602] duration metric: took 26.14434ms to restartPrimaryControlPlane
	I1216 07:03:53.317823 1687487 kubeadm.go:403] duration metric: took 101.97493ms to StartCluster
	I1216 07:03:53.317854 1687487 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.317948 1687487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.318556 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.318810 1687487 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:03:53.318859 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:03:53.318894 1687487 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 07:03:53.319377 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.323257 1687487 out.go:179] * Enabled addons: 
	I1216 07:03:53.326246 1687487 addons.go:530] duration metric: took 7.35197ms for enable addons: enabled=[]
	I1216 07:03:53.326324 1687487 start.go:247] waiting for cluster config update ...
	I1216 07:03:53.326358 1687487 start.go:256] writing updated cluster config ...
	I1216 07:03:53.329613 1687487 out.go:203] 
	I1216 07:03:53.332888 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.333052 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.336576 1687487 out.go:179] * Starting "ha-614518-m02" control-plane node in "ha-614518" cluster
	I1216 07:03:53.339553 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:53.342482 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:53.345454 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:53.345546 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:53.345514 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:53.345877 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:53.345913 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:53.346063 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.363377 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:53.363397 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:53.363414 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:53.363438 1687487 start.go:360] acquireMachinesLock for ha-614518-m02: {Name:mka615bda267fcf7df6d6dfdc68cac769a75315d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:53.363497 1687487 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-614518-m02"
	I1216 07:03:53.363523 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:53.363534 1687487 fix.go:54] fixHost starting: m02
	I1216 07:03:53.363791 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.383383 1687487 fix.go:112] recreateIfNeeded on ha-614518-m02: state=Stopped err=<nil>
	W1216 07:03:53.383415 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:53.386537 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m02" ...
	I1216 07:03:53.386636 1687487 cli_runner.go:164] Run: docker start ha-614518-m02
	I1216 07:03:53.794943 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.822138 1687487 kic.go:430] container "ha-614518-m02" state is running.
	I1216 07:03:53.822535 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:53.851090 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.851356 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:53.851426 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:53.878317 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:53.878677 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:53.878696 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:53.879342 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:03:57.124004 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.124068 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m02"
	I1216 07:03:57.124164 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.173735 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.174061 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.174078 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m02 && echo "ha-614518-m02" | sudo tee /etc/hostname
	I1216 07:03:57.438628 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.438749 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.472722 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.473050 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.473073 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:57.677870 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:03:57.677921 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:57.677946 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:57.677958 1687487 provision.go:84] configureAuth start
	I1216 07:03:57.678055 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:57.722106 1687487 provision.go:143] copyHostCerts
	I1216 07:03:57.722151 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722185 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:57.722198 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722276 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:57.722357 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722379 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:57.722388 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722421 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:57.722465 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722489 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:57.722498 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722529 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:57.722633 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m02 san=[127.0.0.1 192.168.49.3 ha-614518-m02 localhost minikube]
	I1216 07:03:57.844425 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:57.844504 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:57.844548 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.862917 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:57.972376 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:57.972445 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:58.017243 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:58.017311 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:03:58.059767 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:58.059828 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:58.113177 1687487 provision.go:87] duration metric: took 435.20178ms to configureAuth
	I1216 07:03:58.113246 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:58.113513 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:58.113663 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:58.142721 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:58.143019 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:58.143032 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:59.702077 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:59.702157 1687487 machine.go:97] duration metric: took 5.850782021s to provisionDockerMachine
	I1216 07:03:59.702183 1687487 start.go:293] postStartSetup for "ha-614518-m02" (driver="docker")
	I1216 07:03:59.702253 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:59.702337 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:59.702409 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.738247 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:59.855085 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:59.858756 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:59.858785 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:59.858797 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:59.858854 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:59.858930 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:59.858937 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:59.859038 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:59.868409 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:59.890719 1687487 start.go:296] duration metric: took 188.504339ms for postStartSetup
	I1216 07:03:59.890855 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:59.890922 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.909691 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.010830 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:04:00.053896 1687487 fix.go:56] duration metric: took 6.690353109s for fixHost
	I1216 07:04:00.053984 1687487 start.go:83] releasing machines lock for "ha-614518-m02", held for 6.690472315s
	I1216 07:04:00.054132 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:04:00.100321 1687487 out.go:179] * Found network options:
	I1216 07:04:00.105391 1687487 out.go:179]   - NO_PROXY=192.168.49.2
	W1216 07:04:00.108450 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:04:00.108636 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
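
The warnings above come from checking whether node IPs are covered by NO_PROXY entries; a bare IP such as 192.168.49.2 is not a CIDR block, hence "ip not in block". A stdlib-only sketch of that containment check, illustrative rather than minikube's proxy package:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip falls inside any CIDR entry of a
// NO_PROXY-style comma-separated list; plain host/IP entries are ignored
// in this sketch, which is roughly why the log reports "ip not in block".
func ipInNoProxy(ip, noProxy string) bool {
	addr := net.ParseIP(ip)
	if addr == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	// NO_PROXY=192.168.49.2 (from the log) contains no CIDR block,
	// so the secondary node's IP is reported as not covered.
	fmt.Println(ipInNoProxy("192.168.49.3", "192.168.49.2"))
}
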
	I1216 07:04:00.108742 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:04:00.108814 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.109177 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:04:00.115341 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.165700 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.232046 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.645936 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:04:00.658871 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:04:00.658994 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:04:00.687970 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:04:00.688053 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:04:00.688101 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:04:00.688186 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:04:00.715577 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:04:00.751617 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:04:00.751681 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:04:00.778303 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:04:00.802164 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:04:01.047882 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:04:01.301807 1687487 docker.go:234] disabling docker service ...
	I1216 07:04:01.301880 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:04:01.322236 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:04:01.348117 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:04:01.593311 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:04:01.834030 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:04:01.858526 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:04:01.886506 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:04:01.886622 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.922317 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:04:01.922463 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.953232 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.971302 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.993804 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:04:02.013934 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.031424 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.046246 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.066027 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:04:02.080394 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:04:02.095283 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:04:02.419550 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:05:32.857802 1687487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.438149921s)
	I1216 07:05:32.857827 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:05:32.857897 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:05:32.861796 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:05:32.861879 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:05:32.865559 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:05:32.893251 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:05:32.893334 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.921229 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.960111 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:05:32.963074 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:05:32.965965 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:05:32.983713 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:05:32.988187 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:32.998448 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:05:32.998787 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:32.999107 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:05:33.020295 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:05:33.020623 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.3
	I1216 07:05:33.020635 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:05:33.020650 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:05:33.020784 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:05:33.020838 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:05:33.020847 1687487 certs.go:257] generating profile certs ...
	I1216 07:05:33.020922 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:05:33.020982 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.10d34f0f
	I1216 07:05:33.021018 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:05:33.021037 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:05:33.021050 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:05:33.021075 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:05:33.021088 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:05:33.021102 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:05:33.021114 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:05:33.021125 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:05:33.021135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:05:33.021191 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:05:33.021222 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:05:33.021230 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:05:33.021255 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:05:33.021279 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:05:33.021303 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:05:33.021363 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:05:33.021393 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.021405 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.021415 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.021480 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:05:33.040303 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:05:33.132825 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1216 07:05:33.136811 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1216 07:05:33.145267 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1216 07:05:33.148926 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1216 07:05:33.157749 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1216 07:05:33.161324 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1216 07:05:33.170007 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1216 07:05:33.174232 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1216 07:05:33.182495 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1216 07:05:33.186607 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1216 07:05:33.194939 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1216 07:05:33.198815 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1216 07:05:33.207734 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:05:33.226981 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:05:33.246475 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:05:33.265061 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:05:33.284210 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:05:33.306195 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:05:33.324956 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:05:33.343476 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:05:33.361548 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:05:33.380428 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:05:33.398886 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:05:33.416891 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1216 07:05:33.430017 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1216 07:05:33.442986 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1216 07:05:33.456178 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1216 07:05:33.469704 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1216 07:05:33.484299 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1216 07:05:33.499729 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1216 07:05:33.516041 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:05:33.524362 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.532162 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:05:33.540324 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544918 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544995 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.585992 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:05:33.593625 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.601101 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:05:33.608445 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613481 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613546 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.656579 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:05:33.664104 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.671624 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:05:33.679463 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683654 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683720 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.725052 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
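The block above installs each CA into the node's OpenSSL trust store: copy the PEM under /usr/share/ca-certificates, take its subject hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 at it. A local, sudo-less sketch of that last step (illustrative only; the report's commands run over SSH inside the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the OpenSSL subject hash of a PEM cert and creates the
// /etc/ssl/certs/<hash>.0 symlink that the `ln -fs` calls above produce.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}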
	I1216 07:05:33.733624 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:05:33.737572 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:05:33.781425 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:05:33.824276 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:05:33.865794 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:05:33.909050 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:05:33.951953 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
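The `-checkend 86400` calls above make openssl exit non-zero if a certificate expires within the next 24 hours, which is how stale control-plane certs would be detected here. The same test can be done natively with crypto/x509; a sketch under the assumption that each cert is a single-block PEM file:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// equivalent to `openssl x509 -checkend <seconds>` failing.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}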
	I1216 07:05:33.993867 1687487 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1216 07:05:33.993976 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
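The unit text above uses systemd's drop-in override pattern: the bare `ExecStart=` line clears whatever ExecStart the base kubelet.service defines, and the following line installs this node's command (hostname-override and node-ip differ per node). A hedged sketch of rendering such a drop-in from node parameters with text/template; the names dropIn and nodeParams are illustrative, not minikube's own:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type nodeParams struct {
	KubernetesVersion, NodeName, NodeIP string
}

func main() {
	// Render the per-node drop-in to stdout; in the run above it is written to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf over SSH.
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, nodeParams{"v1.34.2", "ha-614518-m02", "192.168.49.3"})
}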
	I1216 07:05:33.994007 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:05:33.994059 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:05:34.009409 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
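The warning above comes from probing for the ip_vs kernel module: `lsmod | grep ip_vs` exited non-zero, so kube-vip's control-plane load-balancing is skipped and only the VIP (192.168.49.254) is configured. A small local illustration of the same probe, reading /proc/modules (which is what lsmod reports from) -- not minikube's kube-vip.go:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether any ip_vs* module is currently loaded.
func ipvsLoaded() (bool, error) {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip_vs loaded:", ok) // if false, skip control-plane load-balancing
}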
	I1216 07:05:34.009486 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1216 07:05:34.009582 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:05:34.018576 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:05:34.018674 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1216 07:05:34.027410 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:05:34.042363 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:05:34.056182 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:05:34.074014 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:05:34.077990 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:34.088295 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.232095 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.247231 1687487 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:05:34.247603 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:34.253170 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:05:34.255848 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.381731 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.396551 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:05:34.396622 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:05:34.397115 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m02" to be "Ready" ...
	I1216 07:05:37.040586 1687487 node_ready.go:49] node "ha-614518-m02" is "Ready"
	I1216 07:05:37.040621 1687487 node_ready.go:38] duration metric: took 2.643481502s for node "ha-614518-m02" to be "Ready" ...
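The node_ready wait above polls the API until ha-614518-m02 reports the Ready condition. An equivalent check with client-go, shown as an assumption-laden sketch (kubeconfig path and node name taken from this run; the helper name nodeReady is ours, not minikube's):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		if ok, err := nodeReady(cs, "ha-614518-m02"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}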
	I1216 07:05:37.040635 1687487 api_server.go:52] waiting for apiserver process to appear ...
	I1216 07:05:37.040695 1687487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:05:37.061374 1687487 api_server.go:72] duration metric: took 2.814094s to wait for apiserver process to appear ...
	I1216 07:05:37.061401 1687487 api_server.go:88] waiting for apiserver healthz status ...
	I1216 07:05:37.061420 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.074087 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.074124 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
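The 500s above are expected while the apiserver finishes its post-start hooks: every subsystem already reports ok except poststarthook/rbac/bootstrap-roles, and /healthz stays at 500 until all hooks pass, so the loop simply retries every ~500ms. A minimal sketch of that kind of poll (InsecureSkipVerify keeps the illustration short; a real client would trust minikubeCA instead) -- not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz retries GET /healthz until it returns 200 or the timeout lapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}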
	I1216 07:05:37.561699 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.575722 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.575749 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.062105 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.073942 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.073979 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.561534 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.571539 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.571575 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.062243 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.070626 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.070656 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.562250 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.570668 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.570709 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.062490 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.071222 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:40.071258 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.561835 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.570234 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:40.570267 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:41.062517 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.070865 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:41.070907 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:41.562123 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.570314 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:41.570354 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:42.061560 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.070019 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:42.070066 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:42.561525 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.575709 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:42.575741 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:43.062386 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.072157 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:43.072235 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:43.561622 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.569766 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:43.569792 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:44.062378 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.073021 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:44.073060 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:44.562264 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.570578 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:44.570610 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:45.063004 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.074685 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:45.074724 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:45.562091 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.570321 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:45.570358 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:46.062073 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.070931 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:46.070966 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:46.561565 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.569995 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:46.570026 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:47.061616 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.072095 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:47.072131 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:47.561577 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.570812 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:47.570839 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:48.062047 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.070373 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:48.070403 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:48.562094 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.570453 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:48.570491 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.062122 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.070449 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.070490 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.561963 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.570228 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.570254 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.061859 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.070692 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.070727 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.562001 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.570230 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.570256 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.061757 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.070029 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.070062 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.561541 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.570443 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.570470 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.061863 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.070098 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.070127 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.561554 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.571992 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.572023 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.061596 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.069723 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.069756 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.562103 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.570175 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.570210 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.061674 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.069916 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.069946 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.561543 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.569758 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.569785 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:55.062452 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:55.071750 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:55.071778 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:55.562411 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:55.572141 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:55.572172 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:56.061606 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:56.070095 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:56.070177 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:56.561548 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:56.569665 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:56.569692 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:57.061801 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:57.069953 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:57.069981 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:57.561491 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:57.569864 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:57.569901 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:58.062468 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:58.070718 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:58.070747 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:58.562420 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:58.584824 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:58.584854 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:59.062385 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:59.070501 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:59.070541 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:59.561854 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:59.569961 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:59.569992 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:00.061869 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:00.114940 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:00.115034 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:00.561553 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:00.570378 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:00.570407 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:01.062023 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:01.070600 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:01.070633 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:01.562296 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:01.570659 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:01.570688 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:02.062180 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:02.070681 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:02.070728 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:02.562216 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:02.570655 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:02.570684 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:03.062338 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.071577 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:03.071605 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:03.562262 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.570378 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:03.570415 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.061866 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.070630 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.070665 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.562372 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.573063 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.573103 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:05.061594 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:05.070425 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 07:06:05.071905 1687487 api_server.go:141] control plane version: v1.34.2
	I1216 07:06:05.071945 1687487 api_server.go:131] duration metric: took 28.010531893s to wait for apiserver health ...
	I1216 07:06:05.071959 1687487 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 07:06:05.081048 1687487 system_pods.go:59] 26 kube-system pods found
	I1216 07:06:05.081158 1687487 system_pods.go:61] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081176 1687487 system_pods.go:61] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081183 1687487 system_pods.go:61] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.081188 1687487 system_pods.go:61] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.081192 1687487 system_pods.go:61] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.081197 1687487 system_pods.go:61] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.081201 1687487 system_pods.go:61] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.081204 1687487 system_pods.go:61] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.081208 1687487 system_pods.go:61] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.081223 1687487 system_pods.go:61] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.081228 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.081233 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.081244 1687487 system_pods.go:61] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.081249 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.081262 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.081266 1687487 system_pods.go:61] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.081270 1687487 system_pods.go:61] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.081276 1687487 system_pods.go:61] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.081291 1687487 system_pods.go:61] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.081296 1687487 system_pods.go:61] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.081301 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.081305 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.081309 1687487 system_pods.go:61] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.081313 1687487 system_pods.go:61] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.081317 1687487 system_pods.go:61] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.081323 1687487 system_pods.go:61] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.081329 1687487 system_pods.go:74] duration metric: took 9.364099ms to wait for pod list to return data ...
	I1216 07:06:05.081337 1687487 default_sa.go:34] waiting for default service account to be created ...
	I1216 07:06:05.084727 1687487 default_sa.go:45] found service account: "default"
	I1216 07:06:05.084759 1687487 default_sa.go:55] duration metric: took 3.415392ms for default service account to be created ...
	I1216 07:06:05.084770 1687487 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 07:06:05.092252 1687487 system_pods.go:86] 26 kube-system pods found
	I1216 07:06:05.092293 1687487 system_pods.go:89] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092305 1687487 system_pods.go:89] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092311 1687487 system_pods.go:89] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.092318 1687487 system_pods.go:89] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.092322 1687487 system_pods.go:89] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.092327 1687487 system_pods.go:89] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.092331 1687487 system_pods.go:89] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.092336 1687487 system_pods.go:89] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.092346 1687487 system_pods.go:89] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.092353 1687487 system_pods.go:89] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.092360 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.092365 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.092376 1687487 system_pods.go:89] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.092381 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.092388 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.092392 1687487 system_pods.go:89] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.092399 1687487 system_pods.go:89] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.092411 1687487 system_pods.go:89] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.092416 1687487 system_pods.go:89] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.092421 1687487 system_pods.go:89] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.092426 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.092433 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.092438 1687487 system_pods.go:89] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.092445 1687487 system_pods.go:89] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.092449 1687487 system_pods.go:89] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.092455 1687487 system_pods.go:89] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.092495 1687487 system_pods.go:126] duration metric: took 7.68911ms to wait for k8s-apps to be running ...
	I1216 07:06:05.092507 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:05.092570 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:05.107026 1687487 system_svc.go:56] duration metric: took 14.508711ms WaitForService to wait for kubelet
	I1216 07:06:05.107098 1687487 kubeadm.go:587] duration metric: took 30.859823393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
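The pod and kubelet checks summarized above can be repeated by hand against the restarted cluster if a failure needs to be chased further (illustrative commands, assuming the ha-614518 kubeconfig context and profile from this run):

	kubectl --context ha-614518 -n kube-system get pods -o wide
	minikube -p ha-614518 ssh -- sudo systemctl is-active kubelet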
	I1216 07:06:05.107133 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:05.110974 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111054 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111086 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111110 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111145 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111170 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111190 1687487 node_conditions.go:105] duration metric: took 4.037891ms to run NodePressure ...
	I1216 07:06:05.111216 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:05.111269 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:05.116668 1687487 out.go:203] 
	I1216 07:06:05.120812 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:05.120934 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.124552 1687487 out.go:179] * Starting "ha-614518-m04" worker node in "ha-614518" cluster
	I1216 07:06:05.128339 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:06:05.132036 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:06:05.135120 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:06:05.135153 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:06:05.135238 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:06:05.135318 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:06:05.135332 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:06:05.135455 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.157793 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:06:05.157815 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:06:05.157833 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:06:05.157859 1687487 start.go:360] acquireMachinesLock for ha-614518-m04: {Name:mk43a7770b67c048f75b229b4d32a0d7d442337b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:06:05.157933 1687487 start.go:364] duration metric: took 53.449µs to acquireMachinesLock for "ha-614518-m04"
	I1216 07:06:05.157958 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:06:05.157970 1687487 fix.go:54] fixHost starting: m04
	I1216 07:06:05.158264 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.178507 1687487 fix.go:112] recreateIfNeeded on ha-614518-m04: state=Stopped err=<nil>
	W1216 07:06:05.178535 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:06:05.182229 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m04" ...
	I1216 07:06:05.182326 1687487 cli_runner.go:164] Run: docker start ha-614518-m04
	I1216 07:06:05.490568 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.514214 1687487 kic.go:430] container "ha-614518-m04" state is running.
	I1216 07:06:05.514594 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:05.536033 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.536263 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:06:05.536336 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:05.566891 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:05.567347 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:05.567367 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:06:05.568162 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:06:08.712253 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
	
	I1216 07:06:08.712286 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m04"
	I1216 07:06:08.712350 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.732562 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.732911 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.732931 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m04 && echo "ha-614518-m04" | sudo tee /etc/hostname
	I1216 07:06:08.889442 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
	
	I1216 07:06:08.889531 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.909382 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.909721 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.909743 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:06:09.077198 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:06:09.077226 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:06:09.077243 1687487 ubuntu.go:190] setting up certificates
	I1216 07:06:09.077252 1687487 provision.go:84] configureAuth start
	I1216 07:06:09.077348 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:09.099011 1687487 provision.go:143] copyHostCerts
	I1216 07:06:09.099061 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099099 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:06:09.099113 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099193 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:06:09.099292 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099317 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:06:09.099324 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099359 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:06:09.099417 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099439 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:06:09.099448 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099477 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:06:09.099540 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m04 san=[127.0.0.1 192.168.49.5 ha-614518-m04 localhost minikube]
	I1216 07:06:09.342772 1687487 provision.go:177] copyRemoteCerts
	I1216 07:06:09.342883 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:06:09.342952 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.362064 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:09.461352 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:06:09.461413 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:06:09.488306 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:06:09.488377 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:06:09.511681 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:06:09.511745 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:06:09.532372 1687487 provision.go:87] duration metric: took 455.10562ms to configureAuth
	I1216 07:06:09.532402 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:06:09.532749 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:09.532862 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.550583 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:09.550921 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:09.550942 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:06:09.906062 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:06:09.906129 1687487 machine.go:97] duration metric: took 4.369846916s to provisionDockerMachine
	I1216 07:06:09.906156 1687487 start.go:293] postStartSetup for "ha-614518-m04" (driver="docker")
	I1216 07:06:09.906186 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:06:09.906302 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:06:09.906394 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.928571 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.043685 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:06:10.067794 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:06:10.067836 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:06:10.067850 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:06:10.067926 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:06:10.068023 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:06:10.068034 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:06:10.068175 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:06:10.080979 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:10.111023 1687487 start.go:296] duration metric: took 204.832511ms for postStartSetup
	I1216 07:06:10.111182 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:06:10.111258 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.133434 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.243926 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:06:10.252839 1687487 fix.go:56] duration metric: took 5.094861586s for fixHost
	I1216 07:06:10.252868 1687487 start.go:83] releasing machines lock for "ha-614518-m04", held for 5.094922297s
	I1216 07:06:10.252940 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:10.273934 1687487 out.go:179] * Found network options:
	I1216 07:06:10.276892 1687487 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1216 07:06:10.279702 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279739 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279765 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279776 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	I1216 07:06:10.279853 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:06:10.279897 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.280186 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:06:10.280250 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.304141 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.316532 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.464790 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:06:10.529284 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:06:10.529353 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:06:10.550769 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:06:10.550846 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:06:10.550924 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:06:10.551036 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:06:10.576598 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:06:10.598097 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:06:10.598259 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:06:10.618172 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:06:10.634284 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:06:10.768085 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:06:10.900504 1687487 docker.go:234] disabling docker service ...
	I1216 07:06:10.900581 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:06:10.927152 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:06:10.942383 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:06:11.076847 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:06:11.223349 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:06:11.239694 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:06:11.255054 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:06:11.255145 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.266034 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:06:11.266152 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.276524 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.286271 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.297358 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:06:11.307624 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.322735 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.331594 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.341363 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:06:11.355843 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:06:11.364696 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:11.491229 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
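Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following settings before the restart (an illustrative reconstruction from those edits, not captured from the node; section headers and the file's other defaults are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]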
	I1216 07:06:11.671501 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:06:11.671633 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:06:11.675428 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:06:11.675526 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:06:11.679282 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:06:11.704854 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:06:11.704992 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.737456 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.775396 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:06:11.778421 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:06:11.781653 1687487 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1216 07:06:11.784682 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:06:11.801080 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:06:11.805027 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:11.815307 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:06:11.815555 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:11.815814 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:06:11.835520 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:06:11.835825 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.5
	I1216 07:06:11.835840 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:06:11.835857 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:06:11.835999 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:06:11.836046 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:06:11.836063 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:06:11.836076 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:06:11.836096 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:06:11.836113 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:06:11.836166 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:06:11.836212 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:06:11.836243 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:06:11.836281 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:06:11.836313 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:06:11.836348 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:06:11.836418 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:11.836451 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:11.836505 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:06:11.836521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:06:11.836544 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:06:11.859722 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:06:11.879459 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:06:11.899359 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:06:11.925816 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:06:11.944678 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:06:11.966397 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:06:11.991349 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:06:11.998038 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.010525 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:06:12.021207 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026113 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026229 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.070208 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:06:12.077832 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.085281 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:06:12.093355 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097389 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097457 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.138619 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:06:12.146494 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.153809 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:06:12.162460 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166549 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166660 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.214872 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:06:12.223038 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:06:12.226786 1687487 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 07:06:12.226832 1687487 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1216 07:06:12.226911 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:06:12.227009 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:06:12.235141 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:06:12.235238 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1216 07:06:12.243052 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:06:12.258163 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
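The generated kubelet unit and drop-in can be inspected on the worker node itself (a manual check, assuming the profile and node names from this run and the file paths shown above):

	minikube -p ha-614518 ssh -n ha-614518-m04 -- sudo cat /lib/systemd/system/kubelet.service /etc/systemd/system/kubelet.service.d/10-kubeadm.conf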
	I1216 07:06:12.272841 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:06:12.276276 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:12.286557 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.414923 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.430788 1687487 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1216 07:06:12.431230 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:12.434498 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:06:12.437537 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.560193 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.575224 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:06:12.575297 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:06:12.575574 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580068 1687487 node_ready.go:49] node "ha-614518-m04" is "Ready"
	I1216 07:06:12.580146 1687487 node_ready.go:38] duration metric: took 4.550298ms for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580174 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:12.580258 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:12.596724 1687487 system_svc.go:56] duration metric: took 16.541875ms WaitForService to wait for kubelet
	I1216 07:06:12.596751 1687487 kubeadm.go:587] duration metric: took 165.918494ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:06:12.596771 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:12.600376 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600404 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600416 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600421 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600449 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600453 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600511 1687487 node_conditions.go:105] duration metric: took 3.699966ms to run NodePressure ...
	I1216 07:06:12.600548 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:12.600573 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:12.600919 1687487 ssh_runner.go:195] Run: rm -f paused
	I1216 07:06:12.604585 1687487 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:06:12.605147 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:06:12.622024 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:06:14.630183 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:17.128396 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:19.129109 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:21.129471 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:23.629238 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	I1216 07:06:24.644123 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-j2dlk" is "Ready"
	I1216 07:06:24.644155 1687487 pod_ready.go:86] duration metric: took 12.022101955s for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:24.644167 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.653985 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-wnl5v" is "Ready"
	I1216 07:06:25.654011 1687487 pod_ready.go:86] duration metric: took 1.009837557s for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
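A comparable readiness check can be expressed with kubectl directly (an equivalent manual form, assuming the same context and the k8s-app=kube-dns label listed in the wait above):

	kubectl --context ha-614518 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s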
	I1216 07:06:25.657436 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663112 1687487 pod_ready.go:94] pod "etcd-ha-614518" is "Ready"
	I1216 07:06:25.663199 1687487 pod_ready.go:86] duration metric: took 5.737586ms for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663224 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668572 1687487 pod_ready.go:94] pod "etcd-ha-614518-m02" is "Ready"
	I1216 07:06:25.668654 1687487 pod_ready.go:86] duration metric: took 5.405889ms for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668681 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.673835 1687487 pod_ready.go:99] pod "etcd-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "etcd-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:25.673908 1687487 pod_ready.go:86] duration metric: took 5.206207ms for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.823380 1687487 request.go:683] "Waited before sending request" delay="149.293024ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1216 07:06:25.826990 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.023449 1687487 request.go:683] "Waited before sending request" delay="196.318606ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518"
	I1216 07:06:26.223386 1687487 request.go:683] "Waited before sending request" delay="196.351246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:26.226414 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518" is "Ready"
	I1216 07:06:26.226443 1687487 pod_ready.go:86] duration metric: took 399.426362ms for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.226454 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.422838 1687487 request.go:683] "Waited before sending request" delay="196.262613ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m02"
	I1216 07:06:26.623137 1687487 request.go:683] "Waited before sending request" delay="197.08654ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:26.626398 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518-m02" is "Ready"
	I1216 07:06:26.626428 1687487 pod_ready.go:86] duration metric: took 399.966937ms for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.626438 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.822787 1687487 request.go:683] "Waited before sending request" delay="196.265148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m03"
	I1216 07:06:27.023430 1687487 request.go:683] "Waited before sending request" delay="197.365ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m03"
	I1216 07:06:27.026875 1687487 pod_ready.go:99] pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-apiserver-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:27.026914 1687487 pod_ready.go:86] duration metric: took 400.4598ms for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.223376 1687487 request.go:683] "Waited before sending request" delay="196.348931ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1216 07:06:27.227355 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.423607 1687487 request.go:683] "Waited before sending request" delay="196.15765ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:27.623198 1687487 request.go:683] "Waited before sending request" delay="196.252798ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:27.822756 1687487 request.go:683] "Waited before sending request" delay="94.181569ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:28.023498 1687487 request.go:683] "Waited before sending request" delay="197.337742ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.423277 1687487 request.go:683] "Waited before sending request" delay="191.324919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.823130 1687487 request.go:683] "Waited before sending request" delay="90.229358ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	W1216 07:06:29.235219 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:31.235951 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:33.734756 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:35.735390 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:38.234527 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:40.734172 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:42.734590 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	I1216 07:06:43.234658 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518" is "Ready"
	I1216 07:06:43.234687 1687487 pod_ready.go:86] duration metric: took 16.007305361s for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.234697 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246154 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518-m02" is "Ready"
	I1216 07:06:43.246184 1687487 pod_ready.go:86] duration metric: took 11.479167ms for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246194 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.251708 1687487 pod_ready.go:99] pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-controller-manager-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:43.251789 1687487 pod_ready.go:86] duration metric: took 5.587232ms for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.255005 1687487 pod_ready.go:83] waiting for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260772 1687487 pod_ready.go:94] pod "kube-proxy-4kdt5" is "Ready"
	I1216 07:06:43.260800 1687487 pod_ready.go:86] duration metric: took 5.764523ms for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260811 1687487 pod_ready.go:83] waiting for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.427957 1687487 request.go:683] "Waited before sending request" delay="164.183098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m04"
	I1216 07:06:43.431695 1687487 pod_ready.go:94] pod "kube-proxy-bmxpt" is "Ready"
	I1216 07:06:43.431727 1687487 pod_ready.go:86] duration metric: took 170.908436ms for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.431744 1687487 pod_ready.go:83] waiting for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.628038 1687487 request.go:683] "Waited before sending request" delay="196.208729ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhwcs"
	I1216 07:06:43.827976 1687487 request.go:683] "Waited before sending request" delay="196.30094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:43.837294 1687487 pod_ready.go:94] pod "kube-proxy-fhwcs" is "Ready"
	I1216 07:06:43.837327 1687487 pod_ready.go:86] duration metric: took 405.576793ms for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.837339 1687487 pod_ready.go:83] waiting for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.028582 1687487 request.go:683] "Waited before sending request" delay="191.164568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqr57"
	I1216 07:06:44.031704 1687487 pod_ready.go:99] pod "kube-proxy-qqr57" in "kube-system" namespace is gone: getting pod "kube-proxy-qqr57" in "kube-system" namespace (will retry): pods "kube-proxy-qqr57" not found
	I1216 07:06:44.031728 1687487 pod_ready.go:86] duration metric: took 194.382484ms for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.228023 1687487 request.go:683] "Waited before sending request" delay="196.190299ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1216 07:06:44.234797 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.428282 1687487 request.go:683] "Waited before sending request" delay="193.336711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518"
	I1216 07:06:44.627997 1687487 request.go:683] "Waited before sending request" delay="196.267207ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:44.631577 1687487 pod_ready.go:94] pod "kube-scheduler-ha-614518" is "Ready"
	I1216 07:06:44.631604 1687487 pod_ready.go:86] duration metric: took 396.729655ms for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.631613 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.828815 1687487 request.go:683] "Waited before sending request" delay="197.130733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.028338 1687487 request.go:683] "Waited before sending request" delay="191.46624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.228724 1687487 request.go:683] "Waited before sending request" delay="96.318053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.428563 1687487 request.go:683] "Waited before sending request" delay="191.750075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.828353 1687487 request.go:683] "Waited before sending request" delay="192.34026ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:46.228325 1687487 request.go:683] "Waited before sending request" delay="93.248724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	W1216 07:06:46.637948 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:49.139119 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:51.638109 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:53.638454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:56.139011 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:58.638095 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:00.638769 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:03.139265 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:05.638593 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:07.638799 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:10.138642 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:12.638602 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:14.641618 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:17.139071 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:19.638792 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:22.138682 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:24.143581 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:26.637942 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:28.638514 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:30.639228 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:32.639571 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:35.139503 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:37.142108 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:39.637866 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:41.638931 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:44.139294 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:46.638205 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:48.638829 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:50.643744 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:53.139962 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:55.140229 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:57.638356 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:00.161064 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:02.638288 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:04.640454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:07.138771 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:09.638023 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:11.638274 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:13.638989 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:16.137649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:18.138649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:20.138856 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:22.638044 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:25.139148 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:27.638438 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:29.638561 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:31.638878 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:34.138583 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:36.638791 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:39.138672 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:41.143386 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:43.638185 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:45.640021 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:48.137933 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:50.638587 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:53.138384 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:55.138692 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:57.638524 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:00.191960 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:02.638290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:04.639287 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:07.139404 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:09.638715 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:12.137968 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:14.138290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:16.138420 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:18.638585 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:20.639656 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:23.138623 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:25.638409 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:27.643066 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:30.140779 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:32.638747 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:34.639250 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:37.137644 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:39.138045 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:41.138733 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:43.139171 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:45.142012 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:47.638719 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:50.139130 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:52.637794 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:54.638451 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:57.137807 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:59.640347 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:02.138615 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:04.140843 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:06.639153 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:09.139049 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:11.139172 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	I1216 07:10:12.605718 1687487 pod_ready.go:86] duration metric: took 3m27.974087596s for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:10:12.605749 1687487 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1216 07:10:12.605764 1687487 pod_ready.go:40] duration metric: took 4m0.001147095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:10:12.608877 1687487 out.go:203] 
	W1216 07:10:12.611764 1687487 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1216 07:10:12.614690 1687487 out.go:203] 

** /stderr **
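Many of the "Waited before sending request ... client-side throttling, not priority and fairness" lines in the log above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side throttling. As a hedged illustration only (the kubeconfig path and the raised limits below are made-up values, not what minikube configures), loosening that limiter looks roughly like this:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a rest.Config from a kubeconfig; the path here is a placeholder.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS=5 and Burst=10; when the request rate exceeds
    	// these limits, requests are delayed locally and client-go logs the
    	// "Waited before sending request" messages seen in the log above.
    	cfg.QPS = 50    // illustrative value, not minikube's setting
    	cfg.Burst = 100 // illustrative value, not minikube's setting
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("client ready:", cs != nil)
    }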
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-614518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
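The exit status 80 above is the extra-wait phase timing out: the pod_ready helper polls each selected kube-system pod until it is Ready or has been deleted, and kube-scheduler-ha-614518-m02 never reported Ready before the 4m WaitExtra deadline. As a rough sketch only (not minikube's actual pod_ready.go; the package, function name, and 2-second polling interval are assumptions), the "Ready or be gone" check amounts to something like:

    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitReadyOrGone is a hypothetical helper: it returns nil once the pod reports
    // the PodReady condition, or once the pod no longer exists ("be gone"), and it
    // returns an error if the context deadline expires first.
    func waitReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if apierrors.IsNotFound(err) {
    			fmt.Printf("pod %q is gone, stopping the wait\n", name)
    			return true, nil
    		}
    		if err != nil {
    			return false, nil // transient API error: keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				return true, nil // Ready
    			}
    		}
    		return false, nil // not Ready yet, poll again
    	})
    }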
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-614518
helpers_test.go:244: (dbg) docker inspect ha-614518:

-- stdout --
	[
	    {
	        "Id": "e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46",
	        "Created": "2025-12-16T06:55:15.920807949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1687611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T07:03:45.310819447Z",
	            "FinishedAt": "2025-12-16T07:03:44.437347575Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/hosts",
	        "LogPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46-json.log",
	        "Name": "/ha-614518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-614518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-614518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46",
	                "LowerDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-614518",
	                "Source": "/var/lib/docker/volumes/ha-614518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-614518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-614518",
	                "name.minikube.sigs.k8s.io": "ha-614518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84d9c6998ba47bdb877c4913d6988c8320c2f46bb6d33489550ea4eb54ae2b9c",
	            "SandboxKey": "/var/run/docker/netns/84d9c6998ba4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34310"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34311"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34313"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-614518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:8c:71:16:ba:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34c8049a560aca568d8e67043aef245d26603d1e6b5021bc9413fe96f5cfa4f6",
	                    "EndpointID": "128f0ab3a1ff878dc623fde0aadf19698e2b387b41dbec7082d4a76b9a429095",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-614518",
	                        "e2503ac81b82"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
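The NetworkSettings.Ports map in the inspect output above is what the restart path later reads to find the host port Docker published for the container's SSH port (22/tcp); the cli_runner lines further down do this with a docker container inspect Go-template. As a hedged sketch using the Docker Engine Go SDK (this is not minikube's code; only the container name "ha-614518" is taken from the output above):

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    	"github.com/docker/go-connections/nat"
    )

    func main() {
    	ctx := context.Background()
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	// Inspect the node container by name, matching the profile in this test.
    	info, err := cli.ContainerInspect(ctx, "ha-614518")
    	if err != nil {
    		panic(err)
    	}

    	// NetworkSettings.Ports maps "22/tcp" to the bindings published on the host;
    	// in the inspect output above that is 127.0.0.1:34310.
    	for _, b := range info.NetworkSettings.Ports[nat.Port("22/tcp")] {
    		fmt.Printf("ssh reachable at %s:%s\n", b.HostIP, b.HostPort)
    	}
    }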
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-614518 -n ha-614518
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 logs -n 25: (1.608039121s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-614518 cp ha-614518-m03:/home/docker/cp-test.txt ha-614518-m04:/home/docker/cp-test_ha-614518-m03_ha-614518-m04.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test_ha-614518-m03_ha-614518-m04.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp testdata/cp-test.txt ha-614518-m04:/home/docker/cp-test.txt                                                             │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1403810740/001/cp-test_ha-614518-m04.txt │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518:/home/docker/cp-test_ha-614518-m04_ha-614518.txt                       │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518.txt                                                 │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m02:/home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m02 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m03:/home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m03 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node start m02 --alsologtostderr -v 5                                                                                      │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node list --alsologtostderr -v 5                                                                                           │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │                     │
	│ stop    │ ha-614518 stop --alsologtostderr -v 5                                                                                                │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:01 UTC │
	│ start   │ ha-614518 start --wait true --alsologtostderr -v 5                                                                                   │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:01 UTC │ 16 Dec 25 07:02 UTC │
	│ node    │ ha-614518 node list --alsologtostderr -v 5                                                                                           │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:02 UTC │                     │
	│ node    │ ha-614518 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:02 UTC │ 16 Dec 25 07:03 UTC │
	│ stop    │ ha-614518 stop --alsologtostderr -v 5                                                                                                │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:03 UTC │ 16 Dec 25 07:03 UTC │
	│ start   │ ha-614518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 07:03:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 07:03:44.880217 1687487 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:03:44.880366 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880378 1687487 out.go:374] Setting ErrFile to fd 2...
	I1216 07:03:44.880384 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880665 1687487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:03:44.881079 1687487 out.go:368] Setting JSON to false
	I1216 07:03:44.882032 1687487 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":35176,"bootTime":1765833449,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:03:44.882105 1687487 start.go:143] virtualization:  
	I1216 07:03:44.885307 1687487 out.go:179] * [ha-614518] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:03:44.889019 1687487 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:03:44.889105 1687487 notify.go:221] Checking for updates...
	I1216 07:03:44.894878 1687487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:03:44.897985 1687487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:44.900761 1687487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:03:44.903578 1687487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:03:44.906467 1687487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:03:44.909985 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:44.910567 1687487 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:03:44.945233 1687487 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:03:44.945374 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.031657 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.011244188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.031829 1687487 docker.go:319] overlay module found
	I1216 07:03:45.037435 1687487 out.go:179] * Using the docker driver based on existing profile
	I1216 07:03:45.040996 1687487 start.go:309] selected driver: docker
	I1216 07:03:45.041023 1687487 start.go:927] validating driver "docker" against &{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.041175 1687487 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:03:45.041288 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.134661 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.119026433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.135091 1687487 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:03:45.135120 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:45.135176 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:45.135234 1687487 start.go:353] cluster config:
	{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.149972 1687487 out.go:179] * Starting "ha-614518" primary control-plane node in "ha-614518" cluster
	I1216 07:03:45.153136 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:45.159266 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:45.170928 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:45.170953 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:45.171004 1687487 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 07:03:45.171018 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:45.171117 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:45.171128 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:45.171285 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.215544 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:45.215626 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:45.215662 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:45.215843 1687487 start.go:360] acquireMachinesLock for ha-614518: {Name:mk3b1063af1f3d64814d71b86469148e674fab2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:45.216121 1687487 start.go:364] duration metric: took 138.127µs to acquireMachinesLock for "ha-614518"
	I1216 07:03:45.216289 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:45.216367 1687487 fix.go:54] fixHost starting: 
	I1216 07:03:45.217861 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.257760 1687487 fix.go:112] recreateIfNeeded on ha-614518: state=Stopped err=<nil>
	W1216 07:03:45.257825 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:45.263736 1687487 out.go:252] * Restarting existing docker container for "ha-614518" ...
	I1216 07:03:45.263878 1687487 cli_runner.go:164] Run: docker start ha-614518
	I1216 07:03:45.543794 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.563314 1687487 kic.go:430] container "ha-614518" state is running.
	I1216 07:03:45.563689 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:45.584894 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.585139 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:45.585210 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:45.605415 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:45.606022 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:45.606037 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:45.607343 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36692->127.0.0.1:34310: read: connection reset by peer
	I1216 07:03:48.740166 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.740200 1687487 ubuntu.go:182] provisioning hostname "ha-614518"
	I1216 07:03:48.740337 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.763945 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.764266 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.764282 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518 && echo "ha-614518" | sudo tee /etc/hostname
	I1216 07:03:48.905449 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.905536 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.922159 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.922475 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.922498 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:49.056835 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:03:49.056862 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:49.056897 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:49.056913 1687487 provision.go:84] configureAuth start
	I1216 07:03:49.056990 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:49.074475 1687487 provision.go:143] copyHostCerts
	I1216 07:03:49.074521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074564 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:49.074584 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074664 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:49.074753 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074776 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:49.074785 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074812 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:49.074873 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074892 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:49.074902 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074929 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:49.074985 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518 san=[127.0.0.1 192.168.49.2 ha-614518 localhost minikube]
	I1216 07:03:49.677070 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:49.677146 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:49.677189 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.696012 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:49.796234 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:49.796294 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:49.813987 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:49.814051 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1216 07:03:49.832994 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:49.833117 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:49.852358 1687487 provision.go:87] duration metric: took 795.417685ms to configureAuth
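The configureAuth phase logged above refreshes the host certs and generates a new server certificate whose SANs match the node (127.0.0.1, 192.168.49.2, ha-614518, localhost, minikube) before copying it to /etc/docker on the machine. A rough sketch of producing such a CA-signed server certificate with Go's standard library, assuming an RSA CA key in PKCS#1 PEM form; the file names and the validity period are placeholders, not minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the signing CA (placeholder paths standing in for ca.pem / ca-key.pem).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh server key plus a template carrying the SANs from the log line above.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-614518"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-614518", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}

openssl could do the same job, but an in-process Go sketch mirrors where the work happens in the log (provision.go), and the resulting server.pem/server-key.pem correspond to the files scp'd to /etc/docker above.
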
	I1216 07:03:49.852395 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:49.852668 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:49.852778 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.870814 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:49.871144 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:49.871168 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:50.263536 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:50.263563 1687487 machine.go:97] duration metric: took 4.678406656s to provisionDockerMachine
	I1216 07:03:50.263587 1687487 start.go:293] postStartSetup for "ha-614518" (driver="docker")
	I1216 07:03:50.263599 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:50.263688 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:50.263741 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.288161 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.388424 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:50.391627 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:50.391661 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:50.391673 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:50.391729 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:50.391823 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:50.391835 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:50.391942 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:50.399136 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:50.417106 1687487 start.go:296] duration metric: took 153.503323ms for postStartSetup
	I1216 07:03:50.417188 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:50.417231 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.433965 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.525944 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:03:50.531286 1687487 fix.go:56] duration metric: took 5.314914646s for fixHost
	I1216 07:03:50.531388 1687487 start.go:83] releasing machines lock for "ha-614518", held for 5.315142989s
	I1216 07:03:50.531501 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:50.548584 1687487 ssh_runner.go:195] Run: cat /version.json
	I1216 07:03:50.548651 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.548722 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:03:50.548786 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.573896 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.582211 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.773920 1687487 ssh_runner.go:195] Run: systemctl --version
	I1216 07:03:50.780399 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:03:50.815666 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:03:50.820120 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:03:50.820193 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:03:50.828039 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:03:50.828121 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:03:50.828169 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:03:50.828249 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:03:50.844121 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:03:50.857243 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:03:50.857381 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:03:50.873095 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:03:50.886187 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:03:51.006275 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:03:51.140914 1687487 docker.go:234] disabling docker service ...
	I1216 07:03:51.140991 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:03:51.157238 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:03:51.171898 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:03:51.287675 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:03:51.421310 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:03:51.434905 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:03:51.449226 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:03:51.449297 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.458120 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:03:51.458190 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.467336 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.476031 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.484943 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:03:51.493309 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.502592 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.511462 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.520904 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:03:51.528691 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:03:51.536073 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:51.644582 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:03:51.813587 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:03:51.813682 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:03:51.818257 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:03:51.818378 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:03:51.822136 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:03:51.848811 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:03:51.848971 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:03:51.877270 1687487 ssh_runner.go:195] Run: crio --version
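After restarting CRI-O, the log shows a bounded wait ("Will wait 60s for socket path /var/run/crio/crio.sock") before the runtime is probed with crictl version and crio --version. A minimal sketch of that socket wait, assuming a half-second poll interval rather than whatever cadence minikube actually uses:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is ready")
}

The version probes above only make sense once that socket exists, which is why the wait comes first.
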
	I1216 07:03:51.911920 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:03:51.914805 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:03:51.931261 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:03:51.935082 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:03:51.945205 1687487 kubeadm.go:884] updating cluster {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:03:51.945357 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:51.945422 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:51.979077 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:51.979106 1687487 crio.go:433] Images already preloaded, skipping extraction
	I1216 07:03:51.979163 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:52.008543 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:52.008569 1687487 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:03:52.008578 1687487 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 07:03:52.008687 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:03:52.008783 1687487 ssh_runner.go:195] Run: crio config
	I1216 07:03:52.064647 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:52.064671 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:52.064694 1687487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:03:52.064717 1687487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-614518 NodeName:ha-614518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:03:52.064852 1687487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-614518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 07:03:52.064876 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:03:52.064936 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:03:52.077257 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
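kube-vip's control-plane load-balancing needs the ip_vs kernel modules, so minikube probes for them with "lsmod | grep ip_vs" and, as logged above, gives up on IPVS when the probe exits non-zero. A sketch of an equivalent check that reads /proc/modules directly (the same data lsmod reports); the fallback message is illustrative only:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether the ip_vs module appears in /proc/modules.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) > 0 && fields[0] == "ip_vs" {
			return true, nil
		}
	}
	return false, scanner.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("ip_vs loaded: IPVS-based control-plane load-balancing is possible")
	} else {
		fmt.Println("ip_vs missing: fall back to the ARP-based VIP, as in the generated config below")
	}
}
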
	I1216 07:03:52.077367 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1216 07:03:52.077440 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:03:52.085615 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:03:52.085717 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1216 07:03:52.093632 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1216 07:03:52.107221 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:03:52.120189 1687487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1216 07:03:52.132971 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:03:52.145766 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:03:52.149312 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:03:52.158923 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:52.283710 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:03:52.301582 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.2
	I1216 07:03:52.301603 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:03:52.301620 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.301773 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:03:52.301822 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:03:52.301833 1687487 certs.go:257] generating profile certs ...
	I1216 07:03:52.301907 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:03:52.301945 1687487 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1
	I1216 07:03:52.301963 1687487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1216 07:03:52.415504 1687487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 ...
	I1216 07:03:52.415537 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1: {Name:mk670a19d587f16baf0df889e9e917056f8f5261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415731 1687487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 ...
	I1216 07:03:52.415747 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1: {Name:mk54bea57dae6ed1500bec8bfd5028c4fbd13a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415839 1687487 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt
	I1216 07:03:52.415977 1687487 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key
	I1216 07:03:52.416116 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:03:52.416135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:03:52.416152 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:03:52.416168 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:03:52.416186 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:03:52.416197 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:03:52.416215 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:03:52.416235 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:03:52.416253 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:03:52.416304 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:03:52.416340 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:03:52.416355 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:03:52.416384 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:03:52.416413 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:03:52.416440 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:03:52.416515 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:52.416550 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.416569 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.416583 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.417145 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:03:52.438246 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:03:52.458550 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:03:52.483806 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:03:52.504536 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:03:52.531165 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:03:52.551893 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:03:52.571589 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:03:52.590649 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:03:52.610138 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:03:52.630965 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:03:52.650790 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:03:52.664186 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:03:52.671337 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.678844 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:03:52.686401 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690368 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690436 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.731470 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
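The sequence above installs the minikube CA into the system trust store: link the PEM under /usr/share/ca-certificates, derive its OpenSSL subject hash (openssl x509 -hash -noout), and expose it as /etc/ssl/certs/<hash>.0 so TLS clients can find it; the final "test -L /etc/ssl/certs/b5213941.0" confirms the link. A sketch of the same pattern driven from Go; the force-remove and error handling are assumptions:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash that names the <hash>.0 symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mimic the -f in "ln -fs"
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink:", err)
		os.Exit(1)
	}
	fmt.Println("installed", cert, "as", link)
}
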
	I1216 07:03:52.738706 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.745967 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:03:52.753284 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757015 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757119 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.798254 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:03:52.805456 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.812464 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:03:52.820202 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823851 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823958 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.864891 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:03:52.872666 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:03:52.876565 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:03:52.917593 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:03:52.962371 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:03:53.011634 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:03:53.070012 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:03:53.127584 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
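Each "openssl x509 -checkend 86400" call above asks one question: will this certificate still be valid 24 hours from now? The same check expressed in Go, using one of the paths from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h: regenerate before restarting the control plane")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
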
	I1216 07:03:53.215856 1687487 kubeadm.go:401] StartCluster: {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:53.216035 1687487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:03:53.216134 1687487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:03:53.263680 1687487 cri.go:89] found id: "11e4b44d62d5436a07f6d8edd733f4092c09af04d3fa6130a9ee2d504c2d7b92"
	I1216 07:03:53.263744 1687487 cri.go:89] found id: "69514719ce90eebffbe68b0ace74e14259ceea7c07980c6918b6af6e8b91ba10"
	I1216 07:03:53.263764 1687487 cri.go:89] found id: "b6e4d702970e634028ab9da9ca8e258d02bb0aa908a74a428d72bd35cdec320d"
	I1216 07:03:53.263787 1687487 cri.go:89] found id: "c0e9d15ebb1cd884461c491d76b9c135253b28403f1a18a97c1bdb68443fe858"
	I1216 07:03:53.263822 1687487 cri.go:89] found id: "db591d0d437f81b8c65552b6efbd2ca8fb29bb1e0989d62b2cce8be69b46105c"
	I1216 07:03:53.263846 1687487 cri.go:89] found id: ""
	I1216 07:03:53.263924 1687487 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 07:03:53.279629 1687487 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:03:53Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:03:53.279752 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:03:53.291564 1687487 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 07:03:53.291626 1687487 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 07:03:53.291717 1687487 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 07:03:53.306008 1687487 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:03:53.306492 1687487 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-614518" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.306648 1687487 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "ha-614518" cluster setting kubeconfig missing "ha-614518" context setting]
	I1216 07:03:53.306941 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.307502 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
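The rest.Config dump above shows the API client being built directly from the profile's client certificate, key, and CA rather than from a kubeconfig file. A hedged sketch of constructing such a client with client-go (an external module; the node listing is only a smoke test, not what minikube does at this point):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/home/jenkins/minikube-integration/22141-1596013/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/ha-614518/client.crt",
			KeyFile:  base + "/profiles/ha-614518/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
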
	I1216 07:03:53.308322 1687487 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 07:03:53.308427 1687487 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 07:03:53.308488 1687487 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 07:03:53.308515 1687487 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 07:03:53.308406 1687487 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 07:03:53.308623 1687487 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 07:03:53.308936 1687487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 07:03:53.317737 1687487 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 07:03:53.317797 1687487 kubeadm.go:602] duration metric: took 26.14434ms to restartPrimaryControlPlane
	I1216 07:03:53.317823 1687487 kubeadm.go:403] duration metric: took 101.97493ms to StartCluster
	I1216 07:03:53.317854 1687487 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.317948 1687487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.318556 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.318810 1687487 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:03:53.318859 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:03:53.318894 1687487 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 07:03:53.319377 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.323257 1687487 out.go:179] * Enabled addons: 
	I1216 07:03:53.326246 1687487 addons.go:530] duration metric: took 7.35197ms for enable addons: enabled=[]
	I1216 07:03:53.326324 1687487 start.go:247] waiting for cluster config update ...
	I1216 07:03:53.326358 1687487 start.go:256] writing updated cluster config ...
	I1216 07:03:53.329613 1687487 out.go:203] 
	I1216 07:03:53.332888 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.333052 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.336576 1687487 out.go:179] * Starting "ha-614518-m02" control-plane node in "ha-614518" cluster
	I1216 07:03:53.339553 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:53.342482 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:53.345454 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:53.345546 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:53.345514 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:53.345877 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:53.345913 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:53.346063 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.363377 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:53.363397 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:53.363414 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:53.363438 1687487 start.go:360] acquireMachinesLock for ha-614518-m02: {Name:mka615bda267fcf7df6d6dfdc68cac769a75315d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:53.363497 1687487 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-614518-m02"
	I1216 07:03:53.363523 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:53.363534 1687487 fix.go:54] fixHost starting: m02
	I1216 07:03:53.363791 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.383383 1687487 fix.go:112] recreateIfNeeded on ha-614518-m02: state=Stopped err=<nil>
	W1216 07:03:53.383415 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:53.386537 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m02" ...
	I1216 07:03:53.386636 1687487 cli_runner.go:164] Run: docker start ha-614518-m02
	I1216 07:03:53.794943 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.822138 1687487 kic.go:430] container "ha-614518-m02" state is running.
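Restarting the stopped m02 node follows the same pattern as the primary: docker start, then poll docker container inspect --format={{.State.Status}} until it reports running, exactly the commands visible in the log above. A small sketch of that loop; the 60-second timeout and 1-second interval are assumptions:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// containerState returns the docker-reported state ("running", "exited", ...).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "ha-614518-m02"
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "docker start:", err)
		os.Exit(1)
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if state, err := containerState(name); err == nil && state == "running" {
			fmt.Println(name, "is running")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", name)
	os.Exit(1)
}
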
	I1216 07:03:53.822535 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:53.851090 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.851356 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:53.851426 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:53.878317 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:53.878677 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:53.878696 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:53.879342 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:03:57.124004 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.124068 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m02"
	I1216 07:03:57.124164 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.173735 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.174061 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.174078 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m02 && echo "ha-614518-m02" | sudo tee /etc/hostname
	I1216 07:03:57.438628 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.438749 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.472722 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.473050 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.473073 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:57.677870 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:03:57.677921 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:57.677946 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:57.677958 1687487 provision.go:84] configureAuth start
	I1216 07:03:57.678055 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:57.722106 1687487 provision.go:143] copyHostCerts
	I1216 07:03:57.722151 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722185 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:57.722198 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722276 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:57.722357 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722379 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:57.722388 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722421 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:57.722465 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722489 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:57.722498 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722529 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:57.722633 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m02 san=[127.0.0.1 192.168.49.3 ha-614518-m02 localhost minikube]
	I1216 07:03:57.844425 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:57.844504 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:57.844548 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.862917 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:57.972376 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:57.972445 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:58.017243 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:58.017311 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:03:58.059767 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:58.059828 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:58.113177 1687487 provision.go:87] duration metric: took 435.20178ms to configureAuth
	I1216 07:03:58.113246 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:58.113513 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:58.113663 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:58.142721 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:58.143019 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:58.143032 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:59.702077 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:59.702157 1687487 machine.go:97] duration metric: took 5.850782021s to provisionDockerMachine
	I1216 07:03:59.702183 1687487 start.go:293] postStartSetup for "ha-614518-m02" (driver="docker")
	I1216 07:03:59.702253 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:59.702337 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:59.702409 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.738247 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:59.855085 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:59.858756 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:59.858785 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:59.858797 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:59.858854 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:59.858930 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:59.858937 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:59.859038 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:59.868409 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:59.890719 1687487 start.go:296] duration metric: took 188.504339ms for postStartSetup
	I1216 07:03:59.890855 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:59.890922 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.909691 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.010830 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:04:00.053896 1687487 fix.go:56] duration metric: took 6.690353109s for fixHost
	I1216 07:04:00.053984 1687487 start.go:83] releasing machines lock for "ha-614518-m02", held for 6.690472315s
	I1216 07:04:00.054132 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:04:00.100321 1687487 out.go:179] * Found network options:
	I1216 07:04:00.105391 1687487 out.go:179]   - NO_PROXY=192.168.49.2
	W1216 07:04:00.108450 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:04:00.108636 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	I1216 07:04:00.108742 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:04:00.108814 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.109177 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:04:00.115341 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.165700 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.232046 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.645936 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:04:00.658871 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:04:00.658994 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:04:00.687970 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:04:00.688053 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:04:00.688101 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:04:00.688186 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:04:00.715577 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:04:00.751617 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:04:00.751681 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:04:00.778303 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:04:00.802164 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:04:01.047882 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:04:01.301807 1687487 docker.go:234] disabling docker service ...
	I1216 07:04:01.301880 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:04:01.322236 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:04:01.348117 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:04:01.593311 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:04:01.834030 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:04:01.858526 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:04:01.886506 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:04:01.886622 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.922317 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:04:01.922463 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.953232 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.971302 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.993804 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:04:02.013934 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.031424 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.046246 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.066027 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:04:02.080394 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:04:02.095283 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:04:02.419550 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:05:32.857802 1687487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.438149921s)
	I1216 07:05:32.857827 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:05:32.857897 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:05:32.861796 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:05:32.861879 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:05:32.865559 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:05:32.893251 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:05:32.893334 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.921229 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.960111 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:05:32.963074 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:05:32.965965 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:05:32.983713 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:05:32.988187 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:32.998448 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:05:32.998787 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:32.999107 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:05:33.020295 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:05:33.020623 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.3
	I1216 07:05:33.020635 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:05:33.020650 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:05:33.020784 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:05:33.020838 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:05:33.020847 1687487 certs.go:257] generating profile certs ...
	I1216 07:05:33.020922 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:05:33.020982 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.10d34f0f
	I1216 07:05:33.021018 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:05:33.021037 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:05:33.021050 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:05:33.021075 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:05:33.021088 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:05:33.021102 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:05:33.021114 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:05:33.021125 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:05:33.021135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:05:33.021191 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:05:33.021222 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:05:33.021230 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:05:33.021255 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:05:33.021279 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:05:33.021303 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:05:33.021363 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:05:33.021393 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.021405 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.021415 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.021480 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:05:33.040303 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:05:33.132825 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1216 07:05:33.136811 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1216 07:05:33.145267 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1216 07:05:33.148926 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1216 07:05:33.157749 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1216 07:05:33.161324 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1216 07:05:33.170007 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1216 07:05:33.174232 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1216 07:05:33.182495 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1216 07:05:33.186607 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1216 07:05:33.194939 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1216 07:05:33.198815 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1216 07:05:33.207734 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:05:33.226981 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:05:33.246475 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:05:33.265061 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:05:33.284210 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:05:33.306195 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:05:33.324956 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:05:33.343476 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:05:33.361548 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:05:33.380428 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:05:33.398886 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:05:33.416891 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1216 07:05:33.430017 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1216 07:05:33.442986 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1216 07:05:33.456178 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1216 07:05:33.469704 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1216 07:05:33.484299 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1216 07:05:33.499729 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1216 07:05:33.516041 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:05:33.524362 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.532162 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:05:33.540324 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544918 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544995 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.585992 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:05:33.593625 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.601101 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:05:33.608445 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613481 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613546 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.656579 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:05:33.664104 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.671624 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:05:33.679463 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683654 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683720 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.725052 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:05:33.733624 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:05:33.737572 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:05:33.781425 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:05:33.824276 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:05:33.865794 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:05:33.909050 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:05:33.951953 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 07:05:33.993867 1687487 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1216 07:05:33.993976 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:05:33.994007 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:05:33.994059 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:05:34.009409 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:05:34.009486 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1216 07:05:34.009582 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:05:34.018576 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:05:34.018674 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1216 07:05:34.027410 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:05:34.042363 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:05:34.056182 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:05:34.074014 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:05:34.077990 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:34.088295 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.232095 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.247231 1687487 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:05:34.247603 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:34.253170 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:05:34.255848 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.381731 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.396551 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:05:34.396622 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:05:34.397115 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m02" to be "Ready" ...
	I1216 07:05:37.040586 1687487 node_ready.go:49] node "ha-614518-m02" is "Ready"
	I1216 07:05:37.040621 1687487 node_ready.go:38] duration metric: took 2.643481502s for node "ha-614518-m02" to be "Ready" ...
	I1216 07:05:37.040635 1687487 api_server.go:52] waiting for apiserver process to appear ...
	I1216 07:05:37.040695 1687487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:05:37.061374 1687487 api_server.go:72] duration metric: took 2.814094s to wait for apiserver process to appear ...
	I1216 07:05:37.061401 1687487 api_server.go:88] waiting for apiserver healthz status ...
	I1216 07:05:37.061420 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.074087 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.074124 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:37.561699 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.575722 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.575749 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.062105 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.073942 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.073979 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.561534 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.571539 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.571575 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.062243 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.070626 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.070656 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.562250 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.570668 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.570709 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.062490 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.071222 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:40.071258 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.561835 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.570234 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:40.570267 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:41.062517 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.070865 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:41.070907 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:41.562123 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.570314 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:41.570354 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:42.061560 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.070019 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:42.070066 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:42.561525 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.575709 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:42.575741 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:43.062386 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.072157 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:43.072235 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:43.561622 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.569766 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:43.569792 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:44.062378 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.073021 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:44.073060 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:44.562264 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.570578 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:44.570610 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:45.063004 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.074685 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:45.074724 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:45.562091 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.570321 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:45.570358 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:46.062073 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.070931 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:46.070966 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:46.561565 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.569995 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:46.570026 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:47.061616 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.072095 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:47.072131 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:47.561577 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.570812 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:47.570839 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:48.062047 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.070373 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:48.070403 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:48.562094 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.570453 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:48.570491 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.062122 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.070449 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.070490 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.561963 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.570228 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.570254 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.061859 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.070692 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.070727 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.562001 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.570230 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.570256 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.061757 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.070029 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.070062 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.561541 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.570443 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.570470 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.061863 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.070098 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.070127 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.561554 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.571992 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.572023 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.061596 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.069723 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.069756 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.562103 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.570175 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.570210 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.061674 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.069916 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.069946 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.561543 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.569758 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.569785 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:55.062452 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:55.071750 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:55.071778 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[... 07:05:55.562411 through 07:06:02.570684: fifteen further healthz polls, roughly 500 ms apart, each returned the identical 500 response and warning shown above (every check ok except [-]poststarthook/rbac/bootstrap-roles failed: reason withheld), each ending in "healthz check failed" ...]
	I1216 07:06:03.062338 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.071577 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:03.071605 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:03.562262 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.570378 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:03.570415 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.061866 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.070630 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.070665 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.562372 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.573063 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.573103 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:05.061594 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:05.070425 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 07:06:05.071905 1687487 api_server.go:141] control plane version: v1.34.2
	I1216 07:06:05.071945 1687487 api_server.go:131] duration metric: took 28.010531893s to wait for apiserver health ...
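The 28s wait above is a simple poll: roughly every 500ms the client GETs /healthz and treats any non-200 answer (here caused by the single failing [-]poststarthook/rbac/bootstrap-roles check) as not ready, stopping as soon as the endpoint returns 200. A minimal stdlib sketch of the same pattern, not minikube's actual api_server.go code; the endpoint is taken from the log, the interval and timeout are illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    // Non-200 bodies (like the [+]/[-] check list above) are printed for debugging.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
        client := &http.Client{
            // Sketch only: the apiserver cert is signed by the cluster CA,
            // which is not loaded here, so verification is skipped.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.49.2:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }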
	I1216 07:06:05.071959 1687487 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 07:06:05.081048 1687487 system_pods.go:59] 26 kube-system pods found
	I1216 07:06:05.081158 1687487 system_pods.go:61] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081176 1687487 system_pods.go:61] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081183 1687487 system_pods.go:61] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.081188 1687487 system_pods.go:61] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.081192 1687487 system_pods.go:61] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.081197 1687487 system_pods.go:61] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.081201 1687487 system_pods.go:61] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.081204 1687487 system_pods.go:61] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.081208 1687487 system_pods.go:61] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.081223 1687487 system_pods.go:61] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.081228 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.081233 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.081244 1687487 system_pods.go:61] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.081249 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.081262 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.081266 1687487 system_pods.go:61] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.081270 1687487 system_pods.go:61] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.081276 1687487 system_pods.go:61] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.081291 1687487 system_pods.go:61] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.081296 1687487 system_pods.go:61] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.081301 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.081305 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.081309 1687487 system_pods.go:61] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.081313 1687487 system_pods.go:61] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.081317 1687487 system_pods.go:61] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.081323 1687487 system_pods.go:61] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.081329 1687487 system_pods.go:74] duration metric: took 9.364099ms to wait for pod list to return data ...
	I1216 07:06:05.081337 1687487 default_sa.go:34] waiting for default service account to be created ...
	I1216 07:06:05.084727 1687487 default_sa.go:45] found service account: "default"
	I1216 07:06:05.084759 1687487 default_sa.go:55] duration metric: took 3.415392ms for default service account to be created ...
	I1216 07:06:05.084770 1687487 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 07:06:05.092252 1687487 system_pods.go:86] 26 kube-system pods found
	I1216 07:06:05.092293 1687487 system_pods.go:89] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092305 1687487 system_pods.go:89] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092311 1687487 system_pods.go:89] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.092318 1687487 system_pods.go:89] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.092322 1687487 system_pods.go:89] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.092327 1687487 system_pods.go:89] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.092331 1687487 system_pods.go:89] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.092336 1687487 system_pods.go:89] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.092346 1687487 system_pods.go:89] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.092353 1687487 system_pods.go:89] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.092360 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.092365 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.092376 1687487 system_pods.go:89] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.092381 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.092388 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.092392 1687487 system_pods.go:89] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.092399 1687487 system_pods.go:89] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.092411 1687487 system_pods.go:89] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.092416 1687487 system_pods.go:89] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.092421 1687487 system_pods.go:89] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.092426 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.092433 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.092438 1687487 system_pods.go:89] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.092445 1687487 system_pods.go:89] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.092449 1687487 system_pods.go:89] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.092455 1687487 system_pods.go:89] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.092495 1687487 system_pods.go:126] duration metric: took 7.68911ms to wait for k8s-apps to be running ...
	I1216 07:06:05.092507 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:05.092570 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:05.107026 1687487 system_svc.go:56] duration metric: took 14.508711ms WaitForService to wait for kubelet
	I1216 07:06:05.107098 1687487 kubeadm.go:587] duration metric: took 30.859823393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:06:05.107133 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:05.110974 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111054 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111086 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111110 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111145 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111170 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111190 1687487 node_conditions.go:105] duration metric: took 4.037891ms to run NodePressure ...
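The NodePressure step reads each node's status and capacity (three nodes here, each reporting 2 CPUs and 203034800Ki of ephemeral storage). A rough client-go sketch of listing nodes and printing the same capacity fields plus the pressure conditions; the kubeconfig path is a placeholder and this is not the checker minikube itself uses:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; point it at the profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, cond := range n.Status.Conditions {
                // Only the pressure conditions matter for this check.
                if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
                    fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
                }
            }
        }
    }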
	I1216 07:06:05.111216 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:05.111269 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:05.116668 1687487 out.go:203] 
	I1216 07:06:05.120812 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:05.120934 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.124552 1687487 out.go:179] * Starting "ha-614518-m04" worker node in "ha-614518" cluster
	I1216 07:06:05.128339 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:06:05.132036 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:06:05.135120 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:06:05.135153 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:06:05.135238 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:06:05.135318 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:06:05.135332 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:06:05.135455 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.157793 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:06:05.157815 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:06:05.157833 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:06:05.157859 1687487 start.go:360] acquireMachinesLock for ha-614518-m04: {Name:mk43a7770b67c048f75b229b4d32a0d7d442337b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:06:05.157933 1687487 start.go:364] duration metric: took 53.449µs to acquireMachinesLock for "ha-614518-m04"
	I1216 07:06:05.157958 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:06:05.157970 1687487 fix.go:54] fixHost starting: m04
	I1216 07:06:05.158264 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.178507 1687487 fix.go:112] recreateIfNeeded on ha-614518-m04: state=Stopped err=<nil>
	W1216 07:06:05.178535 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:06:05.182229 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m04" ...
	I1216 07:06:05.182326 1687487 cli_runner.go:164] Run: docker start ha-614518-m04
	I1216 07:06:05.490568 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.514214 1687487 kic.go:430] container "ha-614518-m04" state is running.
	I1216 07:06:05.514594 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:05.536033 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.536263 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:06:05.536336 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:05.566891 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:05.567347 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:05.567367 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:06:05.568162 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:06:08.712253 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
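The earlier "ssh: handshake failed: EOF" is transient: sshd inside the freshly restarted container is not accepting connections yet, so the dial is simply retried until the hostname command above succeeds. A sketch of such a retry loop with golang.org/x/crypto/ssh; the user, port, and key path are copied from the log, while the attempt count and delay are illustrative and this is not minikube's libmachine code:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry redials until sshd inside the restarted container accepts
    // the handshake, matching the transient "handshake failed: EOF" seen above.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int, wait time.Duration) (*ssh.Client, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err
            time.Sleep(wait)
        }
        return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only: skips host key verification
        }
        client, err := dialWithRetry("127.0.0.1:34320", cfg, 10, time.Second)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh ready")
    }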
	
	I1216 07:06:08.712286 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m04"
	I1216 07:06:08.712350 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.732562 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.732911 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.732931 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m04 && echo "ha-614518-m04" | sudo tee /etc/hostname
	I1216 07:06:08.889442 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
	
	I1216 07:06:08.889531 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.909382 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.909721 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.909743 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:06:09.077198 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
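The SSH script above is idempotent: it only edits /etc/hosts when no line already ends in the node hostname, either rewriting an existing 127.0.1.1 entry or appending a new one. A standalone sketch of the same rewrite as a pure string transformation in Go; the function name is invented for illustration, and minikube actually does this remotely with grep/sed/tee as shown:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureLoopbackHostname mirrors the shell snippet above: if no line already
    // maps the hostname, rewrite an existing "127.0.1.1 ..." entry or append one.
    func ensureLoopbackHostname(hosts, hostname string) string {
        hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
        if hasName.MatchString(hosts) {
            return hosts // already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
        }
        if !strings.HasSuffix(hosts, "\n") {
            hosts += "\n"
        }
        return hosts + "127.0.1.1 " + hostname + "\n"
    }

    func main() {
        in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
        fmt.Print(ensureLoopbackHostname(in, "ha-614518-m04"))
    }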
	I1216 07:06:09.077226 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:06:09.077243 1687487 ubuntu.go:190] setting up certificates
	I1216 07:06:09.077252 1687487 provision.go:84] configureAuth start
	I1216 07:06:09.077348 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:09.099011 1687487 provision.go:143] copyHostCerts
	I1216 07:06:09.099061 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099099 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:06:09.099113 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099193 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:06:09.099292 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099317 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:06:09.099324 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099359 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:06:09.099417 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099439 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:06:09.099448 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099477 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:06:09.099540 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m04 san=[127.0.0.1 192.168.49.5 ha-614518-m04 localhost minikube]
	I1216 07:06:09.342772 1687487 provision.go:177] copyRemoteCerts
	I1216 07:06:09.342883 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:06:09.342952 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.362064 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:09.461352 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:06:09.461413 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:06:09.488306 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:06:09.488377 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:06:09.511681 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:06:09.511745 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:06:09.532372 1687487 provision.go:87] duration metric: took 455.10562ms to configureAuth
	I1216 07:06:09.532402 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:06:09.532749 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:09.532862 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.550583 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:09.550921 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:09.550942 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:06:09.906062 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:06:09.906129 1687487 machine.go:97] duration metric: took 4.369846916s to provisionDockerMachine
	I1216 07:06:09.906156 1687487 start.go:293] postStartSetup for "ha-614518-m04" (driver="docker")
	I1216 07:06:09.906186 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:06:09.906302 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:06:09.906394 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.928571 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.043685 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:06:10.067794 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:06:10.067836 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:06:10.067850 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:06:10.067926 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:06:10.068023 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:06:10.068034 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:06:10.068175 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:06:10.080979 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:10.111023 1687487 start.go:296] duration metric: took 204.832511ms for postStartSetup
	I1216 07:06:10.111182 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:06:10.111258 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.133434 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.243926 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:06:10.252839 1687487 fix.go:56] duration metric: took 5.094861586s for fixHost
	I1216 07:06:10.252868 1687487 start.go:83] releasing machines lock for "ha-614518-m04", held for 5.094922297s
	I1216 07:06:10.252940 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:10.273934 1687487 out.go:179] * Found network options:
	I1216 07:06:10.276892 1687487 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1216 07:06:10.279702 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279739 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279765 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279776 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	I1216 07:06:10.279853 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:06:10.279897 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.280186 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:06:10.280250 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.304141 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.316532 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.464790 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:06:10.529284 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:06:10.529353 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:06:10.550769 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:06:10.550846 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:06:10.550924 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:06:10.551036 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:06:10.576598 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:06:10.598097 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:06:10.598259 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:06:10.618172 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:06:10.634284 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:06:10.768085 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:06:10.900504 1687487 docker.go:234] disabling docker service ...
	I1216 07:06:10.900581 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:06:10.927152 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:06:10.942383 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:06:11.076847 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:06:11.223349 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:06:11.239694 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:06:11.255054 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:06:11.255145 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.266034 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:06:11.266152 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.276524 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.286271 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.297358 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:06:11.307624 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.322735 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.331594 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.341363 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:06:11.355843 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:06:11.364696 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:11.491229 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
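The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, force cgroup_manager to "cgroupfs", set conmon_cgroup = "pod", add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, enable IP forwarding, and then restart CRI-O. A rough Go sketch of the two central rewrites as a string transformation; the function name and sample input are illustrative, and the real edits run remotely via sed as logged:

    package main

    import (
        "fmt"
        "regexp"
    )

    // applyCrioOverrides mimics the sed edits above on the contents of
    // /etc/crio/crio.conf.d/02-crio.conf: pin the pause image and the cgroup manager.
    func applyCrioOverrides(conf, pauseImage, cgroupManager string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
        cgm := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = cgm.ReplaceAllString(conf, `cgroup_manager = "`+cgroupManager+`"`)
        return conf
    }

    func main() {
        in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        fmt.Print(applyCrioOverrides(in, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
    }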
	I1216 07:06:11.671501 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:06:11.671633 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:06:11.675428 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:06:11.675526 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:06:11.679282 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:06:11.704854 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:06:11.704992 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.737456 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.775396 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:06:11.778421 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:06:11.781653 1687487 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1216 07:06:11.784682 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:06:11.801080 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:06:11.805027 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:11.815307 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:06:11.815555 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:11.815814 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:06:11.835520 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:06:11.835825 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.5
	I1216 07:06:11.835840 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:06:11.835857 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:06:11.835999 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:06:11.836046 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:06:11.836063 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:06:11.836076 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:06:11.836096 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:06:11.836113 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:06:11.836166 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:06:11.836212 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:06:11.836243 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:06:11.836281 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:06:11.836313 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:06:11.836348 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:06:11.836418 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:11.836451 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:11.836505 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:06:11.836521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:06:11.836544 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:06:11.859722 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:06:11.879459 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:06:11.899359 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:06:11.925816 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:06:11.944678 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:06:11.966397 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:06:11.991349 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:06:11.998038 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.010525 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:06:12.021207 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026113 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026229 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.070208 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:06:12.077832 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.085281 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:06:12.093355 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097389 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097457 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.138619 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:06:12.146494 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.153809 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:06:12.162460 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166549 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166660 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.214872 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
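Each PEM above is installed the same way: copy it into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) so OpenSSL-based clients can locate the CA. A small sketch of that last step, shelling out to the same openssl x509 -hash -noout invocation; the function name and error handling are illustrative, and writing into /etc/ssl/certs requires root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert computes the OpenSSL subject hash of a PEM certificate and
    // symlinks <certsDir>/<hash>.0 to it, the layout OpenSSL uses to find CAs.
    func installCACert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace a stale link if one exists
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }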
	I1216 07:06:12.223038 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:06:12.226786 1687487 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 07:06:12.226832 1687487 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1216 07:06:12.226911 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:06:12.227009 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:06:12.235141 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:06:12.235238 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1216 07:06:12.243052 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:06:12.258163 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:06:12.272841 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:06:12.276276 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:12.286557 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.414923 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.430788 1687487 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1216 07:06:12.431230 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:12.434498 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:06:12.437537 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.560193 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.575224 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:06:12.575297 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:06:12.575574 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580068 1687487 node_ready.go:49] node "ha-614518-m04" is "Ready"
	I1216 07:06:12.580146 1687487 node_ready.go:38] duration metric: took 4.550298ms for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580174 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:12.580258 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:12.596724 1687487 system_svc.go:56] duration metric: took 16.541875ms WaitForService to wait for kubelet
	I1216 07:06:12.596751 1687487 kubeadm.go:587] duration metric: took 165.918494ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:06:12.596771 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:12.600376 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600404 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600416 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600421 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600449 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600453 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600511 1687487 node_conditions.go:105] duration metric: took 3.699966ms to run NodePressure ...
	I1216 07:06:12.600548 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:12.600573 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:12.600919 1687487 ssh_runner.go:195] Run: rm -f paused
	I1216 07:06:12.604585 1687487 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:06:12.605147 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:06:12.622024 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:06:14.630183 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:17.128396 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:19.129109 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:21.129471 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:23.629238 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	I1216 07:06:24.644123 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-j2dlk" is "Ready"
	I1216 07:06:24.644155 1687487 pod_ready.go:86] duration metric: took 12.022101955s for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:24.644167 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.653985 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-wnl5v" is "Ready"
	I1216 07:06:25.654011 1687487 pod_ready.go:86] duration metric: took 1.009837557s for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.657436 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663112 1687487 pod_ready.go:94] pod "etcd-ha-614518" is "Ready"
	I1216 07:06:25.663199 1687487 pod_ready.go:86] duration metric: took 5.737586ms for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663224 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668572 1687487 pod_ready.go:94] pod "etcd-ha-614518-m02" is "Ready"
	I1216 07:06:25.668654 1687487 pod_ready.go:86] duration metric: took 5.405889ms for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668681 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.673835 1687487 pod_ready.go:99] pod "etcd-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "etcd-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:25.673908 1687487 pod_ready.go:86] duration metric: took 5.206207ms for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.823380 1687487 request.go:683] "Waited before sending request" delay="149.293024ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1216 07:06:25.826990 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.023449 1687487 request.go:683] "Waited before sending request" delay="196.318606ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518"
	I1216 07:06:26.223386 1687487 request.go:683] "Waited before sending request" delay="196.351246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:26.226414 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518" is "Ready"
	I1216 07:06:26.226443 1687487 pod_ready.go:86] duration metric: took 399.426362ms for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.226454 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.422838 1687487 request.go:683] "Waited before sending request" delay="196.262613ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m02"
	I1216 07:06:26.623137 1687487 request.go:683] "Waited before sending request" delay="197.08654ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:26.626398 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518-m02" is "Ready"
	I1216 07:06:26.626428 1687487 pod_ready.go:86] duration metric: took 399.966937ms for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.626438 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.822787 1687487 request.go:683] "Waited before sending request" delay="196.265148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m03"
	I1216 07:06:27.023430 1687487 request.go:683] "Waited before sending request" delay="197.365ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m03"
	I1216 07:06:27.026875 1687487 pod_ready.go:99] pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-apiserver-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:27.026914 1687487 pod_ready.go:86] duration metric: took 400.4598ms for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.223376 1687487 request.go:683] "Waited before sending request" delay="196.348931ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1216 07:06:27.227355 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.423607 1687487 request.go:683] "Waited before sending request" delay="196.15765ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:27.623198 1687487 request.go:683] "Waited before sending request" delay="196.252798ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:27.822756 1687487 request.go:683] "Waited before sending request" delay="94.181569ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:28.023498 1687487 request.go:683] "Waited before sending request" delay="197.337742ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.423277 1687487 request.go:683] "Waited before sending request" delay="191.324919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.823130 1687487 request.go:683] "Waited before sending request" delay="90.229358ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	W1216 07:06:29.235219 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:31.235951 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:33.734756 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:35.735390 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:38.234527 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:40.734172 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:42.734590 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	I1216 07:06:43.234658 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518" is "Ready"
	I1216 07:06:43.234687 1687487 pod_ready.go:86] duration metric: took 16.007305361s for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.234697 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246154 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518-m02" is "Ready"
	I1216 07:06:43.246184 1687487 pod_ready.go:86] duration metric: took 11.479167ms for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246194 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.251708 1687487 pod_ready.go:99] pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-controller-manager-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:43.251789 1687487 pod_ready.go:86] duration metric: took 5.587232ms for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.255005 1687487 pod_ready.go:83] waiting for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260772 1687487 pod_ready.go:94] pod "kube-proxy-4kdt5" is "Ready"
	I1216 07:06:43.260800 1687487 pod_ready.go:86] duration metric: took 5.764523ms for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260811 1687487 pod_ready.go:83] waiting for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.427957 1687487 request.go:683] "Waited before sending request" delay="164.183098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m04"
	I1216 07:06:43.431695 1687487 pod_ready.go:94] pod "kube-proxy-bmxpt" is "Ready"
	I1216 07:06:43.431727 1687487 pod_ready.go:86] duration metric: took 170.908436ms for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.431744 1687487 pod_ready.go:83] waiting for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.628038 1687487 request.go:683] "Waited before sending request" delay="196.208729ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhwcs"
	I1216 07:06:43.827976 1687487 request.go:683] "Waited before sending request" delay="196.30094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:43.837294 1687487 pod_ready.go:94] pod "kube-proxy-fhwcs" is "Ready"
	I1216 07:06:43.837327 1687487 pod_ready.go:86] duration metric: took 405.576793ms for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.837339 1687487 pod_ready.go:83] waiting for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.028582 1687487 request.go:683] "Waited before sending request" delay="191.164568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqr57"
	I1216 07:06:44.031704 1687487 pod_ready.go:99] pod "kube-proxy-qqr57" in "kube-system" namespace is gone: getting pod "kube-proxy-qqr57" in "kube-system" namespace (will retry): pods "kube-proxy-qqr57" not found
	I1216 07:06:44.031728 1687487 pod_ready.go:86] duration metric: took 194.382484ms for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.228023 1687487 request.go:683] "Waited before sending request" delay="196.190299ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1216 07:06:44.234797 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.428282 1687487 request.go:683] "Waited before sending request" delay="193.336711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518"
	I1216 07:06:44.627997 1687487 request.go:683] "Waited before sending request" delay="196.267207ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:44.631577 1687487 pod_ready.go:94] pod "kube-scheduler-ha-614518" is "Ready"
	I1216 07:06:44.631604 1687487 pod_ready.go:86] duration metric: took 396.729655ms for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.631613 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.828815 1687487 request.go:683] "Waited before sending request" delay="197.130733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.028338 1687487 request.go:683] "Waited before sending request" delay="191.46624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.228724 1687487 request.go:683] "Waited before sending request" delay="96.318053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.428563 1687487 request.go:683] "Waited before sending request" delay="191.750075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.828353 1687487 request.go:683] "Waited before sending request" delay="192.34026ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:46.228325 1687487 request.go:683] "Waited before sending request" delay="93.248724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	W1216 07:06:46.637948 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:49.139119 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:51.638109 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:53.638454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:56.139011 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:58.638095 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:00.638769 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:03.139265 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:05.638593 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:07.638799 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:10.138642 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:12.638602 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:14.641618 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:17.139071 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:19.638792 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:22.138682 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:24.143581 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:26.637942 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:28.638514 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:30.639228 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:32.639571 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:35.139503 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:37.142108 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:39.637866 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:41.638931 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:44.139294 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:46.638205 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:48.638829 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:50.643744 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:53.139962 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:55.140229 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:57.638356 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:00.161064 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:02.638288 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:04.640454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:07.138771 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:09.638023 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:11.638274 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:13.638989 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:16.137649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:18.138649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:20.138856 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:22.638044 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:25.139148 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:27.638438 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:29.638561 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:31.638878 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:34.138583 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:36.638791 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:39.138672 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:41.143386 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:43.638185 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:45.640021 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:48.137933 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:50.638587 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:53.138384 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:55.138692 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:57.638524 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:00.191960 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:02.638290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:04.639287 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:07.139404 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:09.638715 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:12.137968 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:14.138290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:16.138420 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:18.638585 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:20.639656 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:23.138623 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:25.638409 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:27.643066 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:30.140779 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:32.638747 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:34.639250 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:37.137644 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:39.138045 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:41.138733 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:43.139171 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:45.142012 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:47.638719 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:50.139130 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:52.637794 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:54.638451 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:57.137807 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:59.640347 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:02.138615 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:04.140843 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:06.639153 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:09.139049 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:11.139172 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	I1216 07:10:12.605718 1687487 pod_ready.go:86] duration metric: took 3m27.974087596s for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:10:12.605749 1687487 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1216 07:10:12.605764 1687487 pod_ready.go:40] duration metric: took 4m0.001147095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:10:12.608877 1687487 out.go:203] 
	W1216 07:10:12.611764 1687487 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1216 07:10:12.614690 1687487 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.124962814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.124989079Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.128952589Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.128991022Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.12901366Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132385483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132445241Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132506854Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.13550428Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.135541393Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:40 ha-614518 conmon[1338]: conmon 5fb83a33391310c66121 <ninfo>: container 1340 exited with status 1
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.136243764Z" level=info msg="Removing container: 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.144051183Z" level=info msg="Error loading conmon cgroup of container 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7: cgroup deleted" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.148857672Z" level=info msg="Removed container 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7: kube-system/storage-provisioner/storage-provisioner" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.517109075Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=970ca7aa-d95d-4794-95bf-de423f4d674f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.51851262Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a32745bb-0259-4638-a2da-ddc22003b22b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.519651775Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d94c0706-3299-449d-b1bc-9c7684af150f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.519773393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524607418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524785537Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/312caaa5394938283ea578f1d27f8818b3e8f0134608b0a17d956f12767c2e19/merged/etc/passwd: no such file or directory"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524806846Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/312caaa5394938283ea578f1d27f8818b3e8f0134608b0a17d956f12767c2e19/merged/etc/group: no such file or directory"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.525065295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.544389448Z" level=info msg="Created container 1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56: kube-system/storage-provisioner/storage-provisioner" id=d94c0706-3299-449d-b1bc-9c7684af150f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.546147943Z" level=info msg="Starting container: 1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56" id=2384d913-1660-45b4-a9e4-4a12ccf89aa1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.552972588Z" level=info msg="Started container" PID=1546 containerID=1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56 description=kube-system/storage-provisioner/storage-provisioner id=2384d913-1660-45b4-a9e4-4a12ccf89aa1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66940fef199cd7ea95fa467d76afd336228ac898a0c1f0e8c7b18e7972031eff
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	1093de574e036       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       7                   66940fef199cd       storage-provisioner                 kube-system
	62f5148caf573       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago       Running             kube-controller-manager   8                   b4a4e435e1aa0       kube-controller-manager-ha-614518   kube-system
	5fb83a3339131       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       6                   66940fef199cd       storage-provisioner                 kube-system
	95092e298b4a2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   4 minutes ago       Exited              kube-controller-manager   7                   b4a4e435e1aa0       kube-controller-manager-ha-614518   kube-system
	d39155885e822       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   041859eb301b3       coredns-66bc5c9577-j2dlk            kube-system
	6e64e350bfcdb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 minutes ago       Running             kindnet-cni               2                   ceeed389a3540       kindnet-t2849                       kube-system
	df7febb900c92       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   6 minutes ago       Running             kube-proxy                2                   e6f1de1edc5ee       kube-proxy-4kdt5                    kube-system
	e3a995a401390       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   2                   288cd575c38a7       busybox-7b57f96db7-9rkhz            default
	a0d878c4d93ed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   6735c66af1b27       coredns-66bc5c9577-wnl5v            kube-system
	11e4b44d62d54       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  2                   01654879d92ce       kube-vip-ha-614518                  kube-system
	b6e4d702970e6       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   6 minutes ago       Running             etcd                      2                   b24e85033a9a6       etcd-ha-614518                      kube-system
	c0e9d15ebb1cd       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   6 minutes ago       Running             kube-scheduler            2                   2ec038c0eb369       kube-scheduler-ha-614518            kube-system
	db591d0d437f8       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   6 minutes ago       Running             kube-apiserver            2                   3ea7ac550801f       kube-apiserver-ha-614518            kube-system
	
	
	==> coredns [a0d878c4d93ed5aa6b99a6ea96df4f5ccb53c918a3bac903f7dae29fc1cf61ee] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d39155885e822c355840ab6f40d6597b04bb705e1978f74a686ce74f90174ae9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-614518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T06_55_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:55:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:10:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 07:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-614518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                95037a50-a335-45c4-b961-153de44dd8af
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9rkhz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-j2dlk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-wnl5v             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-614518                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-t2849                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-614518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-614518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4kdt5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-614518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-614518                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 3m54s                  kube-proxy       
	  Normal   Starting                 8m12s                  kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-614518 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   NodeHasSufficientPID     8m47s (x8 over 8m47s)  kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m47s (x8 over 8m47s)  kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m47s (x8 over 8m47s)  kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           7m43s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           7m31s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   Starting                 6m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m22s (x8 over 6m22s)  kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m22s (x8 over 6m22s)  kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	
	
	Name:               ha-614518-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_16T06_56_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-614518-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                0e50aad9-c8f5-4539-a363-29b4940497ef
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q9kjv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-614518-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-qpdxp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-614518-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-614518-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fhwcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-614518-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-614518-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 4m7s                   kube-proxy       
	  Normal   Starting                 8m1s                   kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   NodeHasNoDiskPressure    9m54s (x8 over 9m54s)  kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  9m54s (x8 over 9m54s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m54s (x8 over 9m54s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m54s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             9m29s                  node-controller  Node ha-614518-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Warning  CgroupV1                 8m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m44s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m43s (x8 over 8m44s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m43s (x8 over 8m44s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m43s (x8 over 8m44s)  kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           7m43s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           7m31s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   Starting                 6m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m19s (x8 over 6m19s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m19s (x8 over 6m19s)  kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m19s (x8 over 6m19s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m19s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	
	
	Name:               ha-614518-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_16T06_58_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:58:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:10:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:59:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-614518-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                b5a1c428-1aac-458a-ac8c-b2278f4653df
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-d8h6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kindnet-kwm49               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-bmxpt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m20s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 3m50s                  kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x3 over 11m)      kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x3 over 11m)      kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x3 over 11m)      kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-614518-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Warning  CgroupV1                 7m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 7m44s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m43s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   NodeHasNoDiskPressure    7m40s (x8 over 7m43s)  kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m40s (x8 over 7m43s)  kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m40s (x8 over 7m43s)  kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m31s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   Starting                 4m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m5s (x8 over 4m8s)    kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m5s (x8 over 4m8s)    kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m5s (x8 over 4m8s)    kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	[Dec16 06:55] overlayfs: idmapped layers are currently not supported
	[Dec16 06:56] overlayfs: idmapped layers are currently not supported
	[Dec16 06:57] overlayfs: idmapped layers are currently not supported
	[Dec16 06:58] overlayfs: idmapped layers are currently not supported
	[Dec16 07:00] overlayfs: idmapped layers are currently not supported
	[Dec16 07:01] overlayfs: idmapped layers are currently not supported
	[  +3.826905] overlayfs: idmapped layers are currently not supported
	[Dec16 07:02] overlayfs: idmapped layers are currently not supported
	[ +35.241631] overlayfs: idmapped layers are currently not supported
	[Dec16 07:03] overlayfs: idmapped layers are currently not supported
	[  +2.815105] overlayfs: idmapped layers are currently not supported
	[Dec16 07:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b6e4d702970e634028ab9da9ca8e258d02bb0aa908a74a428d72bd35cdec320d] <==
	{"level":"info","ts":"2025-12-16T07:05:37.111144Z","caller":"traceutil/trace.go:172","msg":"trace[478204759] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:2579; }","duration":"153.657765ms","start":"2025-12-16T07:05:36.957482Z","end":"2025-12-16T07:05:37.111139Z","steps":["trace[478204759] 'agreement among raft nodes before linearized reading'  (duration: 153.641888ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111265Z","caller":"traceutil/trace.go:172","msg":"trace[1751390839] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:2; response_revision:2579; }","duration":"153.79544ms","start":"2025-12-16T07:05:36.957465Z","end":"2025-12-16T07:05:37.111260Z","steps":["trace[1751390839] 'agreement among raft nodes before linearized reading'  (duration: 153.738282ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111362Z","caller":"traceutil/trace.go:172","msg":"trace[28163213] range","detail":"{range_begin:/registry/resourceclaims/; range_end:/registry/resourceclaims0; response_count:0; response_revision:2579; }","duration":"153.909944ms","start":"2025-12-16T07:05:36.957447Z","end":"2025-12-16T07:05:37.111357Z","steps":["trace[28163213] 'agreement among raft nodes before linearized reading'  (duration: 153.891507ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111450Z","caller":"traceutil/trace.go:172","msg":"trace[1848148655] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:2579; }","duration":"154.017343ms","start":"2025-12-16T07:05:36.957428Z","end":"2025-12-16T07:05:37.111446Z","steps":["trace[1848148655] 'agreement among raft nodes before linearized reading'  (duration: 154.000407ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111616Z","caller":"traceutil/trace.go:172","msg":"trace[2000339098] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:8; response_revision:2579; }","duration":"154.198949ms","start":"2025-12-16T07:05:36.957412Z","end":"2025-12-16T07:05:37.111611Z","steps":["trace[2000339098] 'agreement among raft nodes before linearized reading'  (duration: 154.103251ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111724Z","caller":"traceutil/trace.go:172","msg":"trace[1572280958] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2579; }","duration":"154.327336ms","start":"2025-12-16T07:05:36.957392Z","end":"2025-12-16T07:05:37.111719Z","steps":["trace[1572280958] 'agreement among raft nodes before linearized reading'  (duration: 154.307217ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111819Z","caller":"traceutil/trace.go:172","msg":"trace[1774297222] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:2579; }","duration":"154.438599ms","start":"2025-12-16T07:05:36.957375Z","end":"2025-12-16T07:05:37.111814Z","steps":["trace[1774297222] 'agreement among raft nodes before linearized reading'  (duration: 154.420826ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111903Z","caller":"traceutil/trace.go:172","msg":"trace[2023288818] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:2579; }","duration":"154.540352ms","start":"2025-12-16T07:05:36.957359Z","end":"2025-12-16T07:05:37.111899Z","steps":["trace[2023288818] 'agreement among raft nodes before linearized reading'  (duration: 154.524622ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111988Z","caller":"traceutil/trace.go:172","msg":"trace[1206328630] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:2579; }","duration":"154.643664ms","start":"2025-12-16T07:05:36.957341Z","end":"2025-12-16T07:05:37.111985Z","steps":["trace[1206328630] 'agreement among raft nodes before linearized reading'  (duration: 154.626515ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112132Z","caller":"traceutil/trace.go:172","msg":"trace[1906072706] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:11; response_revision:2579; }","duration":"154.802829ms","start":"2025-12-16T07:05:36.957325Z","end":"2025-12-16T07:05:37.112128Z","steps":["trace[1906072706] 'agreement among raft nodes before linearized reading'  (duration: 154.723689ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112237Z","caller":"traceutil/trace.go:172","msg":"trace[871907471] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:2579; }","duration":"154.926703ms","start":"2025-12-16T07:05:36.957305Z","end":"2025-12-16T07:05:37.112231Z","steps":["trace[871907471] 'agreement among raft nodes before linearized reading'  (duration: 154.909981ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112324Z","caller":"traceutil/trace.go:172","msg":"trace[864616195] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:2579; }","duration":"155.901796ms","start":"2025-12-16T07:05:36.956418Z","end":"2025-12-16T07:05:37.112320Z","steps":["trace[864616195] 'agreement among raft nodes before linearized reading'  (duration: 155.884368ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112450Z","caller":"traceutil/trace.go:172","msg":"trace[1944004236] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2579; }","duration":"156.045585ms","start":"2025-12-16T07:05:36.956400Z","end":"2025-12-16T07:05:37.112445Z","steps":["trace[1944004236] 'agreement among raft nodes before linearized reading'  (duration: 155.989461ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.117587Z","caller":"traceutil/trace.go:172","msg":"trace[1696147218] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:2579; }","duration":"161.197491ms","start":"2025-12-16T07:05:36.956382Z","end":"2025-12-16T07:05:37.117580Z","steps":["trace[1696147218] 'agreement among raft nodes before linearized reading'  (duration: 161.11758ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.117791Z","caller":"traceutil/trace.go:172","msg":"trace[223674650] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:12; response_revision:2579; }","duration":"161.419196ms","start":"2025-12-16T07:05:36.956367Z","end":"2025-12-16T07:05:37.117786Z","steps":["trace[223674650] 'agreement among raft nodes before linearized reading'  (duration: 161.346531ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.117988Z","caller":"traceutil/trace.go:172","msg":"trace[185344251] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:21; response_revision:2579; }","duration":"161.63387ms","start":"2025-12-16T07:05:36.956349Z","end":"2025-12-16T07:05:37.117983Z","steps":["trace[185344251] 'agreement among raft nodes before linearized reading'  (duration: 161.538615ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118106Z","caller":"traceutil/trace.go:172","msg":"trace[1006990986] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:2579; }","duration":"161.781088ms","start":"2025-12-16T07:05:36.956320Z","end":"2025-12-16T07:05:37.118101Z","steps":["trace[1006990986] 'agreement among raft nodes before linearized reading'  (duration: 161.7579ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118199Z","caller":"traceutil/trace.go:172","msg":"trace[958088094] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:2579; }","duration":"161.891234ms","start":"2025-12-16T07:05:36.956302Z","end":"2025-12-16T07:05:37.118194Z","steps":["trace[958088094] 'agreement among raft nodes before linearized reading'  (duration: 161.870491ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118290Z","caller":"traceutil/trace.go:172","msg":"trace[450497122] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:2579; }","duration":"162.006707ms","start":"2025-12-16T07:05:36.956279Z","end":"2025-12-16T07:05:37.118286Z","steps":["trace[450497122] 'agreement among raft nodes before linearized reading'  (duration: 161.989566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118483Z","caller":"traceutil/trace.go:172","msg":"trace[1764111923] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:2579; }","duration":"162.653352ms","start":"2025-12-16T07:05:36.955825Z","end":"2025-12-16T07:05:37.118478Z","steps":["trace[1764111923] 'agreement among raft nodes before linearized reading'  (duration: 162.534417ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118605Z","caller":"traceutil/trace.go:172","msg":"trace[1832685940] range","detail":"{range_begin:/registry/events/default/ha-614518.1881a02a52654ab4; range_end:; response_count:1; response_revision:2579; }","duration":"162.798881ms","start":"2025-12-16T07:05:36.955802Z","end":"2025-12-16T07:05:37.118601Z","steps":["trace[1832685940] 'agreement among raft nodes before linearized reading'  (duration: 162.752414ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118689Z","caller":"traceutil/trace.go:172","msg":"trace[1910732752] range","detail":"{range_begin:/registry/resourceslices; range_end:; response_count:0; response_revision:2579; }","duration":"164.621942ms","start":"2025-12-16T07:05:36.954063Z","end":"2025-12-16T07:05:37.118685Z","steps":["trace[1910732752] 'agreement among raft nodes before linearized reading'  (duration: 164.605434ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118767Z","caller":"traceutil/trace.go:172","msg":"trace[1026138916] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"175.255622ms","start":"2025-12-16T07:05:36.943507Z","end":"2025-12-16T07:05:37.118763Z","steps":["trace[1026138916] 'agreement among raft nodes before linearized reading'  (duration: 175.242846ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.139792Z","caller":"traceutil/trace.go:172","msg":"trace[1783019215] transaction","detail":"{read_only:false; response_revision:2580; number_of_response:1; }","duration":"102.74853ms","start":"2025-12-16T07:05:37.037022Z","end":"2025-12-16T07:05:37.139771Z","steps":["trace[1783019215] 'process raft request'  (duration: 100.394402ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.179847Z","caller":"traceutil/trace.go:172","msg":"trace[311341052] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2582; }","duration":"143.874409ms","start":"2025-12-16T07:05:37.035961Z","end":"2025-12-16T07:05:37.179836Z","steps":["trace[311341052] 'agreement among raft nodes before linearized reading'  (duration: 139.025227ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:10:14 up  9:52,  0 user,  load average: 0.40, 1.57, 1.57
	Linux ha-614518 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e64e350bfcdb0ad3cefabf63e1a4acc10762dcf6c5cfb20629a03af5db77445] <==
	I1216 07:09:33.121926       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:09:43.121776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:09:43.121809       1 main.go:301] handling current node
	I1216 07:09:43.121825       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:09:43.121831       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:09:43.122006       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:09:43.122019       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:09:53.121506       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:09:53.121631       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:09:53.121782       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:09:53.121798       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:09:53.121854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:09:53.121866       1 main.go:301] handling current node
	I1216 07:10:03.121659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:10:03.121690       1 main.go:301] handling current node
	I1216 07:10:03.121707       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:10:03.121714       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:10:03.121865       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:10:03.121880       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:10:13.120855       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:10:13.120895       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:10:13.121050       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:10:13.121069       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:10:13.121178       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:10:13.121192       1 main.go:301] handling current node
	
	
	==> kube-apiserver [db591d0d437f81b8c65552b6efbd2ca8fb29bb1e0989d62b2cce8be69b46105c] <==
	{"level":"warn","ts":"2025-12-16T07:05:36.946229Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018af860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946250Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40030372c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946270Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40010305a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946293Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4b680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946313Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd7c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946331Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4a780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946455Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018ae3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946777Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953177Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a5680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953240Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400137ba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953261Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c85c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953280Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c84000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953301Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400274cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953323Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018563c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953342Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018574a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953358Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953376Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001856d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953393Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953410Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40017050e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953428Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002be8f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953452Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002213c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953475Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400286b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953492Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4a000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1216 07:05:52.809366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1216 07:06:04.987898       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	
	
	==> kube-controller-manager [62f5148caf57328eb2231340bd1f0fda0819319965c786abfdb83aeb5ed01f5e] <==
	I1216 07:06:32.762952       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 07:06:32.762965       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 07:06:32.763103       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 07:06:32.763212       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-614518"
	I1216 07:06:32.763278       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-614518-m02"
	I1216 07:06:32.763330       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-614518-m04"
	I1216 07:06:32.763013       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-614518-m04"
	I1216 07:06:32.763628       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 07:06:32.767053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:06:32.771968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:06:32.772032       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 07:06:32.772041       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 07:06:32.772555       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 07:06:32.776046       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 07:06:32.776052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 07:06:32.786517       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 07:06:32.786581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 07:06:32.786621       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 07:06:32.786648       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 07:06:32.786659       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 07:06:32.786673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 07:06:32.798877       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 07:06:32.805236       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 07:06:32.809401       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 07:06:32.814713       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-controller-manager [95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478] <==
	I1216 07:05:28.645623       1 serving.go:386] Generated self-signed cert in-memory
	I1216 07:05:29.867986       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1216 07:05:29.868027       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:05:29.870775       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 07:05:29.870890       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 07:05:29.871397       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1216 07:05:29.871485       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 07:05:41.889989       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [df7febb900c92c1ec552f11013f0ffc72f6a301ff2a34356063a3a3d5508e6f6] <==
	E1216 07:04:12.258444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:21.248820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:33.125461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:58.756889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:05:31.552851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1216 07:06:20.432877       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 07:06:20.432912       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 07:06:20.432992       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 07:06:20.451656       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 07:06:20.451712       1 server_linux.go:132] "Using iptables Proxier"
	I1216 07:06:20.455545       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 07:06:20.455867       1 server.go:527] "Version info" version="v1.34.2"
	I1216 07:06:20.455889       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:06:20.457590       1 config.go:200] "Starting service config controller"
	I1216 07:06:20.457611       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 07:06:20.457630       1 config.go:106] "Starting endpoint slice config controller"
	I1216 07:06:20.457635       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 07:06:20.457646       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 07:06:20.457649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 07:06:20.458370       1 config.go:309] "Starting node config controller"
	I1216 07:06:20.458392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 07:06:20.458399       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 07:06:20.558370       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 07:06:20.558388       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 07:06:20.558421       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0e9d15ebb1cd884461c491d76b9c135253b28403f1a18a97c1bdb68443fe858] <==
	E1216 07:03:59.819964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 07:03:59.820147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 07:03:59.820256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 07:03:59.820687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 07:03:59.824783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1216 07:04:00.679883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 07:04:00.726192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:00.776690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 07:04:00.841266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 07:04:00.859797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 07:04:00.879356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 07:04:00.886912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 07:04:00.919634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 07:04:00.958908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 07:04:00.959058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 07:04:01.001026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 07:04:01.006661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 07:04:01.010174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 07:04:01.037770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 07:04:01.074332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 07:04:01.101325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 07:04:01.113105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 07:04:01.257180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1216 07:04:01.380284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1216 07:04:04.392043       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 07:05:41 ha-614518 kubelet[805]: E1216 07:05:41.963351     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:43 ha-614518 kubelet[805]: I1216 07:05:43.172588     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:05:43 ha-614518 kubelet[805]: E1216 07:05:43.173239     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:47 ha-614518 kubelet[805]: I1216 07:05:47.980385     805 scope.go:117] "RemoveContainer" containerID="1b90f35e8fe79482d5c14218f1e2e65c47d65394a6eeb0612fbb2b19206d27c7"
	Dec 16 07:05:47 ha-614518 kubelet[805]: I1216 07:05:47.980741     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:05:47 ha-614518 kubelet[805]: E1216 07:05:47.980882     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:05:51 ha-614518 kubelet[805]: I1216 07:05:51.159869     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:05:51 ha-614518 kubelet[805]: E1216 07:05:51.160099     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:59 ha-614518 kubelet[805]: I1216 07:05:59.515424     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:05:59 ha-614518 kubelet[805]: E1216 07:05:59.516065     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:06:02 ha-614518 kubelet[805]: I1216 07:06:02.518762     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:02 ha-614518 kubelet[805]: E1216 07:06:02.519407     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:06:10 ha-614518 kubelet[805]: I1216 07:06:10.515860     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:06:16 ha-614518 kubelet[805]: I1216 07:06:16.515241     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:16 ha-614518 kubelet[805]: E1216 07:06:16.515866     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:06:28 ha-614518 kubelet[805]: I1216 07:06:28.515045     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:41 ha-614518 kubelet[805]: I1216 07:06:41.132515     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:06:41 ha-614518 kubelet[805]: I1216 07:06:41.132828     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:06:41 ha-614518 kubelet[805]: E1216 07:06:41.132959     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:06:52 ha-614518 kubelet[805]: E1216 07:06:52.544364     805 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1e1bffb0be7696eafc690b57ae72d068d188db906113cb72328c74f36504929d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1e1bffb0be7696eafc690b57ae72d068d188db906113cb72328c74f36504929d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_storage-provisioner_c8b9c00b-10bc-423c-b16e-3f3cdb12e907/storage-provisioner/5.log" to get inode usage: stat /var/log/pods/kube-system_storage-provisioner_c8b9c00b-10bc-423c-b16e-3f3cdb12e907/storage-provisioner/5.log: no such file or directory
	Dec 16 07:06:56 ha-614518 kubelet[805]: I1216 07:06:56.515333     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:06:56 ha-614518 kubelet[805]: E1216 07:06:56.515497     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:07:08 ha-614518 kubelet[805]: I1216 07:07:08.515499     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:07:08 ha-614518 kubelet[805]: E1216 07:07:08.515687     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:07:21 ha-614518 kubelet[805]: I1216 07:07:21.515729     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-614518 -n ha-614518
helpers_test.go:270: (dbg) Run:  kubectl --context ha-614518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (391.41s)
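The kubelet excerpt above shows kube-controller-manager and storage-provisioner cycling through CrashLoopBackOff after the cluster restart. For a manual follow-up outside the test harness, the previous-container logs of those two pods are usually the quickest signal; a possible set of commands (not part of the recorded run; pod names are taken from the kubelet log above):

	# follow-up only, not executed by the test
	kubectl --context ha-614518 -n kube-system get pods
	kubectl --context ha-614518 -n kube-system logs --previous kube-controller-manager-ha-614518
	kubectl --context ha-614518 -n kube-system logs --previous storage-provisioner
	kubectl --context ha-614518 -n kube-system describe pod kube-controller-manager-ha-614518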

x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-614518" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-614518\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-614518\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-614518\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"reg
istry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticI
P\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-614518
helpers_test.go:244: (dbg) docker inspect ha-614518:

-- stdout --
	[
	    {
	        "Id": "e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46",
	        "Created": "2025-12-16T06:55:15.920807949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1687611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T07:03:45.310819447Z",
	            "FinishedAt": "2025-12-16T07:03:44.437347575Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/hosts",
	        "LogPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46-json.log",
	        "Name": "/ha-614518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-614518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-614518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46",
	                "LowerDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-614518",
	                "Source": "/var/lib/docker/volumes/ha-614518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-614518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-614518",
	                "name.minikube.sigs.k8s.io": "ha-614518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84d9c6998ba47bdb877c4913d6988c8320c2f46bb6d33489550ea4eb54ae2b9c",
	            "SandboxKey": "/var/run/docker/netns/84d9c6998ba4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34310"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34311"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34313"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-614518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:8c:71:16:ba:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34c8049a560aca568d8e67043aef245d26603d1e6b5021bc9413fe96f5cfa4f6",
	                    "EndpointID": "128f0ab3a1ff878dc623fde0aadf19698e2b387b41dbec7082d4a76b9a429095",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-614518",
	                        "e2503ac81b82"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
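The inspect output above shows every container port published on 127.0.0.1 with an ephemeral host port (8443/tcp mapped to 34313 at the time of capture). The same Go-template lookup that the "Last Start" log below applies to the 22/tcp SSH port can be pointed at the API server port to confirm what the host is dialing; a sketch, not part of the recorded run:

	# mirrors the 22/tcp HostPort lookup used in the Last Start log
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-614518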
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-614518 -n ha-614518
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 logs -n 25: (1.610124746s)
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-614518 cp ha-614518-m03:/home/docker/cp-test.txt ha-614518-m04:/home/docker/cp-test_ha-614518-m03_ha-614518-m04.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test_ha-614518-m03_ha-614518-m04.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp testdata/cp-test.txt ha-614518-m04:/home/docker/cp-test.txt                                                             │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1403810740/001/cp-test_ha-614518-m04.txt │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518:/home/docker/cp-test_ha-614518-m04_ha-614518.txt                       │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518.txt                                                 │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m02:/home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m02 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m03:/home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m03 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node start m02 --alsologtostderr -v 5                                                                                      │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node list --alsologtostderr -v 5                                                                                           │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │                     │
	│ stop    │ ha-614518 stop --alsologtostderr -v 5                                                                                                │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:01 UTC │
	│ start   │ ha-614518 start --wait true --alsologtostderr -v 5                                                                                   │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:01 UTC │ 16 Dec 25 07:02 UTC │
	│ node    │ ha-614518 node list --alsologtostderr -v 5                                                                                           │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:02 UTC │                     │
	│ node    │ ha-614518 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:02 UTC │ 16 Dec 25 07:03 UTC │
	│ stop    │ ha-614518 stop --alsologtostderr -v 5                                                                                                │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:03 UTC │ 16 Dec 25 07:03 UTC │
	│ start   │ ha-614518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 07:03:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 07:03:44.880217 1687487 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:03:44.880366 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880378 1687487 out.go:374] Setting ErrFile to fd 2...
	I1216 07:03:44.880384 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880665 1687487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:03:44.881079 1687487 out.go:368] Setting JSON to false
	I1216 07:03:44.882032 1687487 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":35176,"bootTime":1765833449,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:03:44.882105 1687487 start.go:143] virtualization:  
	I1216 07:03:44.885307 1687487 out.go:179] * [ha-614518] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:03:44.889019 1687487 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:03:44.889105 1687487 notify.go:221] Checking for updates...
	I1216 07:03:44.894878 1687487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:03:44.897985 1687487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:44.900761 1687487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:03:44.903578 1687487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:03:44.906467 1687487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:03:44.909985 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:44.910567 1687487 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:03:44.945233 1687487 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:03:44.945374 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.031657 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.011244188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.031829 1687487 docker.go:319] overlay module found
	I1216 07:03:45.037435 1687487 out.go:179] * Using the docker driver based on existing profile
	I1216 07:03:45.040996 1687487 start.go:309] selected driver: docker
	I1216 07:03:45.041023 1687487 start.go:927] validating driver "docker" against &{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.041175 1687487 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:03:45.041288 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.134661 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.119026433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.135091 1687487 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:03:45.135120 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:45.135176 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:45.135234 1687487 start.go:353] cluster config:
	{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.149972 1687487 out.go:179] * Starting "ha-614518" primary control-plane node in "ha-614518" cluster
	I1216 07:03:45.153136 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:45.159266 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:45.170928 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:45.170953 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:45.171004 1687487 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 07:03:45.171018 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:45.171117 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:45.171128 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:45.171285 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.215544 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:45.215626 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:45.215662 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:45.215843 1687487 start.go:360] acquireMachinesLock for ha-614518: {Name:mk3b1063af1f3d64814d71b86469148e674fab2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:45.216121 1687487 start.go:364] duration metric: took 138.127µs to acquireMachinesLock for "ha-614518"
	I1216 07:03:45.216289 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:45.216367 1687487 fix.go:54] fixHost starting: 
	I1216 07:03:45.217861 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.257760 1687487 fix.go:112] recreateIfNeeded on ha-614518: state=Stopped err=<nil>
	W1216 07:03:45.257825 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:45.263736 1687487 out.go:252] * Restarting existing docker container for "ha-614518" ...
	I1216 07:03:45.263878 1687487 cli_runner.go:164] Run: docker start ha-614518
	I1216 07:03:45.543794 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.563314 1687487 kic.go:430] container "ha-614518" state is running.
	I1216 07:03:45.563689 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:45.584894 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.585139 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:45.585210 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:45.605415 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:45.606022 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:45.606037 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:45.607343 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36692->127.0.0.1:34310: read: connection reset by peer
	I1216 07:03:48.740166 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.740200 1687487 ubuntu.go:182] provisioning hostname "ha-614518"
	I1216 07:03:48.740337 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.763945 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.764266 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.764282 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518 && echo "ha-614518" | sudo tee /etc/hostname
	I1216 07:03:48.905449 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.905536 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.922159 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.922475 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.922498 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:49.056835 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:03:49.056862 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:49.056897 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:49.056913 1687487 provision.go:84] configureAuth start
	I1216 07:03:49.056990 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:49.074475 1687487 provision.go:143] copyHostCerts
	I1216 07:03:49.074521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074564 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:49.074584 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074664 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:49.074753 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074776 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:49.074785 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074812 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:49.074873 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074892 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:49.074902 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074929 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:49.074985 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518 san=[127.0.0.1 192.168.49.2 ha-614518 localhost minikube]
	I1216 07:03:49.677070 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:49.677146 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:49.677189 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.696012 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:49.796234 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:49.796294 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:49.813987 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:49.814051 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1216 07:03:49.832994 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:49.833117 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:49.852358 1687487 provision.go:87] duration metric: took 795.417685ms to configureAuth
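
The provision step above generates the machine's server certificate with SANs covering both addresses and names (san=[127.0.0.1 192.168.49.2 ha-614518 localhost minikube]). As a rough illustration of what that SAN set corresponds to in Go's crypto/x509 terms (a hedged sketch, not minikube's certificate code; the organization and validity period are assumptions taken from nearby log values):

package main

import (
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "math/big"
    "net"
    "time"
)

// sanTemplate carries the same SANs the log line reports; key generation and
// signing against ca-key.pem are omitted for brevity.
func sanTemplate() *x509.Certificate {
    return &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-614518"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0), // roughly the 26280h CertExpiration seen in the profile config
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-614518", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    }
}

func main() {
    t := sanTemplate()
    fmt.Println("DNS SANs:", t.DNSNames, "IP SANs:", t.IPAddresses)
}
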
	I1216 07:03:49.852395 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:49.852668 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:49.852778 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.870814 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:49.871144 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:49.871168 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:50.263536 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:50.263563 1687487 machine.go:97] duration metric: took 4.678406656s to provisionDockerMachine
	I1216 07:03:50.263587 1687487 start.go:293] postStartSetup for "ha-614518" (driver="docker")
	I1216 07:03:50.263599 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:50.263688 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:50.263741 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.288161 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.388424 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:50.391627 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:50.391661 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:50.391673 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:50.391729 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:50.391823 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:50.391835 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:50.391942 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:50.399136 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:50.417106 1687487 start.go:296] duration metric: took 153.503323ms for postStartSetup
	I1216 07:03:50.417188 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:50.417231 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.433965 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.525944 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:03:50.531286 1687487 fix.go:56] duration metric: took 5.314914646s for fixHost
	I1216 07:03:50.531388 1687487 start.go:83] releasing machines lock for "ha-614518", held for 5.315142989s
	I1216 07:03:50.531501 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:50.548584 1687487 ssh_runner.go:195] Run: cat /version.json
	I1216 07:03:50.548651 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.548722 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:03:50.548786 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.573896 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.582211 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.773920 1687487 ssh_runner.go:195] Run: systemctl --version
	I1216 07:03:50.780399 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:03:50.815666 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:03:50.820120 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:03:50.820193 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:03:50.828039 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:03:50.828121 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:03:50.828169 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:03:50.828249 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:03:50.844121 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:03:50.857243 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:03:50.857381 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:03:50.873095 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:03:50.886187 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:03:51.006275 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:03:51.140914 1687487 docker.go:234] disabling docker service ...
	I1216 07:03:51.140991 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:03:51.157238 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:03:51.171898 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:03:51.287675 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:03:51.421310 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:03:51.434905 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:03:51.449226 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:03:51.449297 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.458120 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:03:51.458190 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.467336 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.476031 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.484943 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:03:51.493309 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.502592 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.511462 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.520904 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:03:51.528691 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:03:51.536073 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:51.644582 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
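
The run of sed one-liners above patches /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", the unprivileged-port sysctl) before CRI-O is restarted. The underlying idea is a simple idempotent line replacement; a minimal sketch in Go, with the path and keys assumed from the log rather than taken from minikube's source:

package crioconf

import (
    "os"
    "regexp"
)

// PatchConf rewrites any existing `key = ...` line in a CRI-O drop-in to the
// desired quoted value, mirroring the sed edits above. On the node these
// writes run under sudo; handling of a missing key is omitted here.
func PatchConf(path, key, value string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    patched := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    return os.WriteFile(path, patched, 0644)
}

// e.g. PatchConf("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1")
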
	I1216 07:03:51.813587 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:03:51.813682 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:03:51.818257 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:03:51.818378 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:03:51.822136 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:03:51.848811 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:03:51.848971 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:03:51.877270 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:03:51.911920 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:03:51.914805 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:03:51.931261 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:03:51.935082 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
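
The one-liner above is the usual pattern for editing a root-owned file over a non-root SSH session: build the new /etc/hosts in a temp file (any stale host.minikube.internal entry filtered out, the current mapping appended) and install it with a single sudo cp, since a plain output redirection would run unprivileged. A rough Go equivalent, illustrative only:

package hostsutil

import (
    "os"
    "os/exec"
    "strings"
)

// UpdateHosts rewrites /etc/hosts without any old host.minikube.internal
// entry, appends the current mapping, and installs the result with one sudo cp.
func UpdateHosts(ip string) error {
    data, err := os.ReadFile("/etc/hosts")
    if err != nil {
        return err
    }
    var keep []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        if !strings.HasSuffix(line, "\thost.minikube.internal") {
            keep = append(keep, line)
        }
    }
    keep = append(keep, ip+"\thost.minikube.internal")
    tmp, err := os.CreateTemp("", "hosts")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name())
    if _, err := tmp.WriteString(strings.Join(keep, "\n") + "\n"); err != nil {
        return err
    }
    tmp.Close()
    return exec.Command("sudo", "cp", tmp.Name(), "/etc/hosts").Run()
}
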
	I1216 07:03:51.945205 1687487 kubeadm.go:884] updating cluster {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:03:51.945357 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:51.945422 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:51.979077 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:51.979106 1687487 crio.go:433] Images already preloaded, skipping extraction
	I1216 07:03:51.979163 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:52.008543 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:52.008569 1687487 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:03:52.008578 1687487 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 07:03:52.008687 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:03:52.008783 1687487 ssh_runner.go:195] Run: crio config
	I1216 07:03:52.064647 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:52.064671 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:52.064694 1687487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:03:52.064717 1687487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-614518 NodeName:ha-614518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:03:52.064852 1687487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-614518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 07:03:52.064876 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:03:52.064936 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:03:52.077257 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:03:52.077367 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
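
Before writing this manifest, kube-vip.go probed for the ip_vs kernel module (the failing `lsmod | grep ip_vs` above) and, finding none, generated the static pod without IPVS-based control-plane load-balancing; the VIP 192.168.49.254 is still announced via ARP per the vip_arp setting in the manifest. The probe amounts to the following kind of check — a hedged sketch, not the real kube-vip.go:

package kubevip

import (
    "bytes"
    "os/exec"
)

// ipvsLoaded reports whether the ip_vs module shows up in lsmod; grep exits
// non-zero on no match, so any error means IPVS load-balancing must be skipped.
func ipvsLoaded() bool {
    out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Output()
    return err == nil && len(bytes.TrimSpace(out)) > 0
}
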
	I1216 07:03:52.077440 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:03:52.085615 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:03:52.085717 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1216 07:03:52.093632 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1216 07:03:52.107221 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:03:52.120189 1687487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1216 07:03:52.132971 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:03:52.145766 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:03:52.149312 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:03:52.158923 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:52.283710 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:03:52.301582 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.2
	I1216 07:03:52.301603 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:03:52.301620 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.301773 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:03:52.301822 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:03:52.301833 1687487 certs.go:257] generating profile certs ...
	I1216 07:03:52.301907 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:03:52.301945 1687487 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1
	I1216 07:03:52.301963 1687487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1216 07:03:52.415504 1687487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 ...
	I1216 07:03:52.415537 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1: {Name:mk670a19d587f16baf0df889e9e917056f8f5261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415731 1687487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 ...
	I1216 07:03:52.415747 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1: {Name:mk54bea57dae6ed1500bec8bfd5028c4fbd13a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415839 1687487 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt
	I1216 07:03:52.415977 1687487 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key
	I1216 07:03:52.416116 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:03:52.416135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:03:52.416152 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:03:52.416168 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:03:52.416186 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:03:52.416197 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:03:52.416215 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:03:52.416235 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:03:52.416253 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:03:52.416304 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:03:52.416340 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:03:52.416355 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:03:52.416384 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:03:52.416413 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:03:52.416440 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:03:52.416515 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:52.416550 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.416569 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.416583 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.417145 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:03:52.438246 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:03:52.458550 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:03:52.483806 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:03:52.504536 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:03:52.531165 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:03:52.551893 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:03:52.571589 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:03:52.590649 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:03:52.610138 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:03:52.630965 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:03:52.650790 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:03:52.664186 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:03:52.671337 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.678844 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:03:52.686401 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690368 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690436 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.731470 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:03:52.738706 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.745967 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:03:52.753284 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757015 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757119 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.798254 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:03:52.805456 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.812464 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:03:52.820202 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823851 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823958 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.864891 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
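
The repeated openssl/ln/test sequence above (for minikubeCA.pem, 1599255.pem and 15992552.pem) installs each CA the way OpenSSL-based clients look them up: a symlink in /etc/ssl/certs named after the certificate's subject hash plus a .0 suffix, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names come from. A small sketch of that pattern (paths assumed; on the node the link is created via sudo):

package catrust

import (
    "os"
    "os/exec"
    "strings"
)

// InstallCA links a CA certificate into /etc/ssl/certs under its OpenSSL
// subject hash so that openssl-based TLS clients can find it. Illustrative only.
func InstallCA(pemPath string) error {
    hash, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return err
    }
    link := "/etc/ssl/certs/" + strings.TrimSpace(string(hash)) + ".0"
    _ = os.Remove(link) // behave like ln -fs
    return os.Symlink(pemPath, link)
}
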
	I1216 07:03:52.872666 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:03:52.876565 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:03:52.917593 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:03:52.962371 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:03:53.011634 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:03:53.070012 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:03:53.127584 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 07:03:53.215856 1687487 kubeadm.go:401] StartCluster: {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:53.216035 1687487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:03:53.216134 1687487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:03:53.263680 1687487 cri.go:89] found id: "11e4b44d62d5436a07f6d8edd733f4092c09af04d3fa6130a9ee2d504c2d7b92"
	I1216 07:03:53.263744 1687487 cri.go:89] found id: "69514719ce90eebffbe68b0ace74e14259ceea7c07980c6918b6af6e8b91ba10"
	I1216 07:03:53.263764 1687487 cri.go:89] found id: "b6e4d702970e634028ab9da9ca8e258d02bb0aa908a74a428d72bd35cdec320d"
	I1216 07:03:53.263787 1687487 cri.go:89] found id: "c0e9d15ebb1cd884461c491d76b9c135253b28403f1a18a97c1bdb68443fe858"
	I1216 07:03:53.263822 1687487 cri.go:89] found id: "db591d0d437f81b8c65552b6efbd2ca8fb29bb1e0989d62b2cce8be69b46105c"
	I1216 07:03:53.263846 1687487 cri.go:89] found id: ""
	I1216 07:03:53.263924 1687487 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 07:03:53.279629 1687487 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:03:53Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:03:53.279752 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:03:53.291564 1687487 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 07:03:53.291626 1687487 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 07:03:53.291717 1687487 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 07:03:53.306008 1687487 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:03:53.306492 1687487 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-614518" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.306648 1687487 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "ha-614518" cluster setting kubeconfig missing "ha-614518" context setting]
	I1216 07:03:53.306941 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
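
kubeconfig.go reports above that the "ha-614518" cluster and context are missing from the test run's kubeconfig and repairs the file under the write lock it is acquiring. With client-go's clientcmd package the repair boils down to roughly the following — an illustrative sketch, not minikube's implementation; the auth-info wiring is an assumption:

package kubeconfigfix

import (
    "k8s.io/client-go/tools/clientcmd"
    api "k8s.io/client-go/tools/clientcmd/api"
)

// RepairKubeconfig re-adds a missing cluster and context entry, then writes
// the kubeconfig back in place.
func RepairKubeconfig(path, name, server, caFile string) error {
    cfg, err := clientcmd.LoadFromFile(path)
    if err != nil {
        return err
    }
    cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
    cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    return clientcmd.WriteToFile(*cfg, path)
}
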
	I1216 07:03:53.307502 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:03:53.308322 1687487 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 07:03:53.308427 1687487 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 07:03:53.308488 1687487 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 07:03:53.308515 1687487 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 07:03:53.308406 1687487 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 07:03:53.308623 1687487 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 07:03:53.308936 1687487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 07:03:53.317737 1687487 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 07:03:53.317797 1687487 kubeadm.go:602] duration metric: took 26.14434ms to restartPrimaryControlPlane
	I1216 07:03:53.317823 1687487 kubeadm.go:403] duration metric: took 101.97493ms to StartCluster
	I1216 07:03:53.317854 1687487 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.317948 1687487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.318556 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.318810 1687487 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:03:53.318859 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:03:53.318894 1687487 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 07:03:53.319377 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.323257 1687487 out.go:179] * Enabled addons: 
	I1216 07:03:53.326246 1687487 addons.go:530] duration metric: took 7.35197ms for enable addons: enabled=[]
	I1216 07:03:53.326324 1687487 start.go:247] waiting for cluster config update ...
	I1216 07:03:53.326358 1687487 start.go:256] writing updated cluster config ...
	I1216 07:03:53.329613 1687487 out.go:203] 
	I1216 07:03:53.332888 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.333052 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.336576 1687487 out.go:179] * Starting "ha-614518-m02" control-plane node in "ha-614518" cluster
	I1216 07:03:53.339553 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:53.342482 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:53.345454 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:53.345546 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:53.345514 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:53.345877 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:53.345913 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:53.346063 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.363377 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:53.363397 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:53.363414 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:53.363438 1687487 start.go:360] acquireMachinesLock for ha-614518-m02: {Name:mka615bda267fcf7df6d6dfdc68cac769a75315d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:53.363497 1687487 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-614518-m02"
	I1216 07:03:53.363523 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:53.363534 1687487 fix.go:54] fixHost starting: m02
	I1216 07:03:53.363791 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.383383 1687487 fix.go:112] recreateIfNeeded on ha-614518-m02: state=Stopped err=<nil>
	W1216 07:03:53.383415 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:53.386537 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m02" ...
	I1216 07:03:53.386636 1687487 cli_runner.go:164] Run: docker start ha-614518-m02
	I1216 07:03:53.794943 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.822138 1687487 kic.go:430] container "ha-614518-m02" state is running.
	I1216 07:03:53.822535 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:53.851090 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.851356 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:53.851426 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:53.878317 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:53.878677 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:53.878696 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:53.879342 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:03:57.124004 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.124068 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m02"
	I1216 07:03:57.124164 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.173735 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.174061 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.174078 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m02 && echo "ha-614518-m02" | sudo tee /etc/hostname
	I1216 07:03:57.438628 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.438749 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.472722 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.473050 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.473073 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:57.677870 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:03:57.677921 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:57.677946 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:57.677958 1687487 provision.go:84] configureAuth start
	I1216 07:03:57.678055 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:57.722106 1687487 provision.go:143] copyHostCerts
	I1216 07:03:57.722151 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722185 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:57.722198 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722276 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:57.722357 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722379 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:57.722388 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722421 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:57.722465 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722489 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:57.722498 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722529 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:57.722633 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m02 san=[127.0.0.1 192.168.49.3 ha-614518-m02 localhost minikube]
	I1216 07:03:57.844425 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:57.844504 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:57.844548 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.862917 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:57.972376 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:57.972445 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:58.017243 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:58.017311 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:03:58.059767 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:58.059828 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:58.113177 1687487 provision.go:87] duration metric: took 435.20178ms to configureAuth
	I1216 07:03:58.113246 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:58.113513 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:58.113663 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:58.142721 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:58.143019 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:58.143032 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:59.702077 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:59.702157 1687487 machine.go:97] duration metric: took 5.850782021s to provisionDockerMachine
	I1216 07:03:59.702183 1687487 start.go:293] postStartSetup for "ha-614518-m02" (driver="docker")
	I1216 07:03:59.702253 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:59.702337 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:59.702409 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.738247 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:59.855085 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:59.858756 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:59.858785 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:59.858797 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:59.858854 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:59.858930 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:59.858937 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:59.859038 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:59.868409 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:59.890719 1687487 start.go:296] duration metric: took 188.504339ms for postStartSetup
	I1216 07:03:59.890855 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:59.890922 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.909691 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.010830 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:04:00.053896 1687487 fix.go:56] duration metric: took 6.690353109s for fixHost
	I1216 07:04:00.053984 1687487 start.go:83] releasing machines lock for "ha-614518-m02", held for 6.690472315s
	I1216 07:04:00.054132 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:04:00.100321 1687487 out.go:179] * Found network options:
	I1216 07:04:00.105391 1687487 out.go:179]   - NO_PROXY=192.168.49.2
	W1216 07:04:00.108450 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:04:00.108636 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	I1216 07:04:00.108742 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:04:00.108814 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.109177 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:04:00.115341 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.165700 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.232046 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.645936 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:04:00.658871 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:04:00.658994 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:04:00.687970 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:04:00.688053 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:04:00.688101 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:04:00.688186 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:04:00.715577 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:04:00.751617 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:04:00.751681 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:04:00.778303 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:04:00.802164 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:04:01.047882 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:04:01.301807 1687487 docker.go:234] disabling docker service ...
	I1216 07:04:01.301880 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:04:01.322236 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:04:01.348117 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:04:01.593311 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:04:01.834030 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:04:01.858526 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:04:01.886506 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:04:01.886622 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.922317 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:04:01.922463 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.953232 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.971302 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.993804 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:04:02.013934 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.031424 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.046246 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.066027 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:04:02.080394 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:04:02.095283 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:04:02.419550 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:05:32.857802 1687487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.438149921s)
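	The CRI-O restart above took just over 90 seconds on this node and accounts for almost the entire gap in this provisioning phase. A minimal sketch for digging into a slow restart, assuming shell access to the node; the unit name and socket path are the ones shown in this log, and none of these commands were part of the test run itself:

	    # Inspect the most recent CRI-O start-up messages and confirm the socket is live.
	    sudo journalctl -u crio -n 50 --no-pager
	    sudo systemctl show crio -p ExecMainStartTimestamp -p ActiveState
	    sudo stat /var/run/crio/crio.sock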
	I1216 07:05:32.857827 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:05:32.857897 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:05:32.861796 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:05:32.861879 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:05:32.865559 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:05:32.893251 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:05:32.893334 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.921229 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.960111 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:05:32.963074 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:05:32.965965 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:05:32.983713 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:05:32.988187 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:32.998448 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:05:32.998787 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:32.999107 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:05:33.020295 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:05:33.020623 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.3
	I1216 07:05:33.020635 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:05:33.020650 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:05:33.020784 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:05:33.020838 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:05:33.020847 1687487 certs.go:257] generating profile certs ...
	I1216 07:05:33.020922 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:05:33.020982 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.10d34f0f
	I1216 07:05:33.021018 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:05:33.021037 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:05:33.021050 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:05:33.021075 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:05:33.021088 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:05:33.021102 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:05:33.021114 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:05:33.021125 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:05:33.021135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:05:33.021191 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:05:33.021222 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:05:33.021230 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:05:33.021255 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:05:33.021279 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:05:33.021303 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:05:33.021363 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:05:33.021393 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.021405 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.021415 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.021480 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:05:33.040303 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:05:33.132825 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1216 07:05:33.136811 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1216 07:05:33.145267 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1216 07:05:33.148926 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1216 07:05:33.157749 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1216 07:05:33.161324 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1216 07:05:33.170007 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1216 07:05:33.174232 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1216 07:05:33.182495 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1216 07:05:33.186607 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1216 07:05:33.194939 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1216 07:05:33.198815 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1216 07:05:33.207734 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:05:33.226981 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:05:33.246475 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:05:33.265061 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:05:33.284210 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:05:33.306195 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:05:33.324956 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:05:33.343476 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:05:33.361548 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:05:33.380428 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:05:33.398886 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:05:33.416891 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1216 07:05:33.430017 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1216 07:05:33.442986 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1216 07:05:33.456178 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1216 07:05:33.469704 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1216 07:05:33.484299 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1216 07:05:33.499729 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1216 07:05:33.516041 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:05:33.524362 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.532162 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:05:33.540324 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544918 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544995 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.585992 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:05:33.593625 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.601101 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:05:33.608445 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613481 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613546 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.656579 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:05:33.664104 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.671624 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:05:33.679463 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683654 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683720 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.725052 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:05:33.733624 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:05:33.737572 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:05:33.781425 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:05:33.824276 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:05:33.865794 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:05:33.909050 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:05:33.951953 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 07:05:33.993867 1687487 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1216 07:05:33.993976 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:05:33.994007 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:05:33.994059 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:05:34.009409 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:05:34.009486 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
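	The lsmod probe above failed, so minikube skips kube-vip's IPVS control-plane load balancing and the generated manifest relies on ARP-based VIP announcement only (vip_arp: "true", no load-balancing entries). A small sketch, assuming shell access to the node, for checking whether the ip_vs module could be made available; loading it is an assumption for illustration, not something the test run did:

	    # minikube only enables kube-vip's IPVS load balancing when this probe succeeds
	    # at config-generation time (see the kube-vip.go message above).
	    lsmod | grep ip_vs || echo "ip_vs not currently loaded"
	    sudo modprobe ip_vs && lsmod | grep ip_vs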
	I1216 07:05:34.009582 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:05:34.018576 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:05:34.018674 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1216 07:05:34.027410 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:05:34.042363 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:05:34.056182 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:05:34.074014 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:05:34.077990 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:34.088295 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.232095 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.247231 1687487 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:05:34.247603 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:34.253170 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:05:34.255848 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.381731 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.396551 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:05:34.396622 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:05:34.397115 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m02" to be "Ready" ...
	I1216 07:05:37.040586 1687487 node_ready.go:49] node "ha-614518-m02" is "Ready"
	I1216 07:05:37.040621 1687487 node_ready.go:38] duration metric: took 2.643481502s for node "ha-614518-m02" to be "Ready" ...
	I1216 07:05:37.040635 1687487 api_server.go:52] waiting for apiserver process to appear ...
	I1216 07:05:37.040695 1687487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:05:37.061374 1687487 api_server.go:72] duration metric: took 2.814094s to wait for apiserver process to appear ...
	I1216 07:05:37.061401 1687487 api_server.go:88] waiting for apiserver healthz status ...
	I1216 07:05:37.061420 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.074087 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.074124 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
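	Each 500 above names a single failing check, poststarthook/rbac/bootstrap-roles, with every other hook reporting ok; the loop that follows simply re-polls /healthz until that bootstrap hook completes. A sketch for querying the same endpoint by hand, assuming the CA path, endpoint, and profile name taken from this log (the ?verbose parameter produces the same per-check listing):

	    # From the node, using the cluster CA copied earlier in this log
	    # (anonymous access to /healthz is assumed; it may 403 until the RBAC bootstrap roles exist):
	    curl --cacert /var/lib/minikube/certs/ca.crt "https://192.168.49.2:8443/healthz?verbose"
	    # Or from the host, through the kubeconfig context minikube creates for this profile:
	    kubectl --context ha-614518 get --raw '/healthz?verbose'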
	I1216 07:05:37.561699 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.575722 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.575749 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.062105 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.073942 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.073979 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.561534 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.571539 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.571575 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.062243 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.070626 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.070656 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.562250 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.570668 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.570709 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.062490 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.071222 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:40.071258 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.561835 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.570234 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:40.570267 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:41.062517 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.070865 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:41.070907 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:41.562123 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.570314 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:41.570354 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:42.061560 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.070019 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:42.070066 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:42.561525 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.575709 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:42.575741 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:43.062386 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.072157 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:43.072235 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:43.561622 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.569766 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:43.569792 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:44.062378 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.073021 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:44.073060 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:44.562264 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.570578 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:44.570610 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:45.063004 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.074685 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:45.074724 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:45.562091 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.570321 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:45.570358 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:46.062073 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.070931 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:46.070966 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:46.561565 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.569995 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:46.570026 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:47.061616 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.072095 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:47.072131 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:47.561577 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.570812 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:47.570839 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:48.062047 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.070373 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:48.070403 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:48.562094 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.570453 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:48.570491 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.062122 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.070449 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.070490 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.561963 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.570228 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.570254 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.061859 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.070692 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.070727 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.562001 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.570230 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.570256 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.061757 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.070029 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.070062 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.561541 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.570443 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.570470 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.061863 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.070098 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.070127 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.561554 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.571992 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.572023 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.061596 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.069723 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.069756 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.562103 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.570175 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.570210 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.061674 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.069916 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.069946 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.561543 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.569758 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.569785 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:55.062452 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:55.071750 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:55.071778 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:55.562411 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:55.572141 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:55.572172 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:56.061606 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:56.070095 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:56.070177 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:56.561548 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:56.569665 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:56.569692 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:57.061801 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:57.069953 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:57.069981 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:57.561491 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:57.569864 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:57.569901 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:58.062468 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:58.070718 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:58.070747 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:58.562420 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:58.584824 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:58.584854 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:59.062385 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:59.070501 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:59.070541 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:59.561854 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:59.569961 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:59.569992 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:00.061869 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:00.114940 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:00.115034 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:00.561553 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:00.570378 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:00.570407 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:01.062023 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:01.070600 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:01.070633 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:01.562296 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:01.570659 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:01.570688 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:02.062180 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:02.070681 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:02.070728 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:02.562216 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:02.570655 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:02.570684 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:03.062338 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.071577 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:03.071605 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:03.562262 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.570378 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:03.570415 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.061866 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.070630 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.070665 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.562372 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.573063 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.573103 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:05.061594 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:05.070425 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 07:06:05.071905 1687487 api_server.go:141] control plane version: v1.34.2
	I1216 07:06:05.071945 1687487 api_server.go:131] duration metric: took 28.010531893s to wait for apiserver health ...
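
The repeated 500s above are the apiserver's /healthz reporting that the rbac/bootstrap-roles post-start hook has not completed yet; minikube simply re-polls roughly every 500ms until the endpoint answers 200, which it finally does at 07:06:05. A rough way to reproduce the same poll by hand is sketched below -- the endpoint comes from the log, while the certificate paths are the usual minikube profile locations and are assumptions here, not something the report states:

    # Sketch: poll the apiserver's /healthz until it returns 200 (not minikube's own code)
    until curl -s -o /dev/null -w '%{http_code}' \
          --cacert "$HOME/.minikube/ca.crt" \
          --cert   "$HOME/.minikube/profiles/ha-614518/client.crt" \
          --key    "$HOME/.minikube/profiles/ha-614518/client.key" \
          https://192.168.49.2:8443/healthz | grep -q '^200$'; do
      sleep 0.5
    done
    # /healthz?verbose prints the per-check [+]/[-] breakdown seen in the log
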
	I1216 07:06:05.071959 1687487 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 07:06:05.081048 1687487 system_pods.go:59] 26 kube-system pods found
	I1216 07:06:05.081158 1687487 system_pods.go:61] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081176 1687487 system_pods.go:61] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081183 1687487 system_pods.go:61] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.081188 1687487 system_pods.go:61] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.081192 1687487 system_pods.go:61] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.081197 1687487 system_pods.go:61] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.081201 1687487 system_pods.go:61] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.081204 1687487 system_pods.go:61] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.081208 1687487 system_pods.go:61] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.081223 1687487 system_pods.go:61] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.081228 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.081233 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.081244 1687487 system_pods.go:61] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.081249 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.081262 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.081266 1687487 system_pods.go:61] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.081270 1687487 system_pods.go:61] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.081276 1687487 system_pods.go:61] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.081291 1687487 system_pods.go:61] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.081296 1687487 system_pods.go:61] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.081301 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.081305 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.081309 1687487 system_pods.go:61] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.081313 1687487 system_pods.go:61] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.081317 1687487 system_pods.go:61] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.081323 1687487 system_pods.go:61] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.081329 1687487 system_pods.go:74] duration metric: took 9.364099ms to wait for pod list to return data ...
	I1216 07:06:05.081337 1687487 default_sa.go:34] waiting for default service account to be created ...
	I1216 07:06:05.084727 1687487 default_sa.go:45] found service account: "default"
	I1216 07:06:05.084759 1687487 default_sa.go:55] duration metric: took 3.415392ms for default service account to be created ...
	I1216 07:06:05.084770 1687487 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 07:06:05.092252 1687487 system_pods.go:86] 26 kube-system pods found
	I1216 07:06:05.092293 1687487 system_pods.go:89] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092305 1687487 system_pods.go:89] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092311 1687487 system_pods.go:89] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.092318 1687487 system_pods.go:89] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.092322 1687487 system_pods.go:89] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.092327 1687487 system_pods.go:89] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.092331 1687487 system_pods.go:89] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.092336 1687487 system_pods.go:89] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.092346 1687487 system_pods.go:89] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.092353 1687487 system_pods.go:89] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.092360 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.092365 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.092376 1687487 system_pods.go:89] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.092381 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.092388 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.092392 1687487 system_pods.go:89] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.092399 1687487 system_pods.go:89] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.092411 1687487 system_pods.go:89] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.092416 1687487 system_pods.go:89] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.092421 1687487 system_pods.go:89] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.092426 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.092433 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.092438 1687487 system_pods.go:89] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.092445 1687487 system_pods.go:89] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.092449 1687487 system_pods.go:89] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.092455 1687487 system_pods.go:89] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.092495 1687487 system_pods.go:126] duration metric: took 7.68911ms to wait for k8s-apps to be running ...
	I1216 07:06:05.092507 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:05.092570 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:05.107026 1687487 system_svc.go:56] duration metric: took 14.508711ms WaitForService to wait for kubelet
	I1216 07:06:05.107098 1687487 kubeadm.go:587] duration metric: took 30.859823393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:06:05.107133 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:05.110974 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111054 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111086 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111110 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111145 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111170 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111190 1687487 node_conditions.go:105] duration metric: took 4.037891ms to run NodePressure ...
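
The NodePressure step above only reads each node's advertised capacity (2 CPUs and 203034800Ki of ephemeral storage per node). The same figures can be spot-checked with kubectl; the command below is illustrative and assumes the ha-614518 kubeconfig context is active:

    # Print the capacity fields the log is echoing, for one node
    kubectl get node ha-614518 -o jsonpath='{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}'
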
	I1216 07:06:05.111216 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:05.111269 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:05.116668 1687487 out.go:203] 
	I1216 07:06:05.120812 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:05.120934 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.124552 1687487 out.go:179] * Starting "ha-614518-m04" worker node in "ha-614518" cluster
	I1216 07:06:05.128339 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:06:05.132036 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:06:05.135120 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:06:05.135153 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:06:05.135238 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:06:05.135318 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:06:05.135332 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:06:05.135455 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.157793 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:06:05.157815 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:06:05.157833 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:06:05.157859 1687487 start.go:360] acquireMachinesLock for ha-614518-m04: {Name:mk43a7770b67c048f75b229b4d32a0d7d442337b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:06:05.157933 1687487 start.go:364] duration metric: took 53.449µs to acquireMachinesLock for "ha-614518-m04"
	I1216 07:06:05.157958 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:06:05.157970 1687487 fix.go:54] fixHost starting: m04
	I1216 07:06:05.158264 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.178507 1687487 fix.go:112] recreateIfNeeded on ha-614518-m04: state=Stopped err=<nil>
	W1216 07:06:05.178535 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:06:05.182229 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m04" ...
	I1216 07:06:05.182326 1687487 cli_runner.go:164] Run: docker start ha-614518-m04
	I1216 07:06:05.490568 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.514214 1687487 kic.go:430] container "ha-614518-m04" state is running.
	I1216 07:06:05.514594 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:05.536033 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.536263 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:06:05.536336 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:05.566891 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:05.567347 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:05.567367 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:06:05.568162 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:06:08.712253 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
	
	I1216 07:06:08.712286 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m04"
	I1216 07:06:08.712350 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.732562 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.732911 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.732931 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m04 && echo "ha-614518-m04" | sudo tee /etc/hostname
	I1216 07:06:08.889442 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
	
	I1216 07:06:08.889531 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.909382 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.909721 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.909743 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:06:09.077198 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:06:09.077226 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:06:09.077243 1687487 ubuntu.go:190] setting up certificates
	I1216 07:06:09.077252 1687487 provision.go:84] configureAuth start
	I1216 07:06:09.077348 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:09.099011 1687487 provision.go:143] copyHostCerts
	I1216 07:06:09.099061 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099099 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:06:09.099113 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099193 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:06:09.099292 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099317 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:06:09.099324 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099359 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:06:09.099417 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099439 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:06:09.099448 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099477 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:06:09.099540 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m04 san=[127.0.0.1 192.168.49.5 ha-614518-m04 localhost minikube]
	I1216 07:06:09.342772 1687487 provision.go:177] copyRemoteCerts
	I1216 07:06:09.342883 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:06:09.342952 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.362064 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:09.461352 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:06:09.461413 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:06:09.488306 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:06:09.488377 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:06:09.511681 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:06:09.511745 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:06:09.532372 1687487 provision.go:87] duration metric: took 455.10562ms to configureAuth
	I1216 07:06:09.532402 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:06:09.532749 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:09.532862 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.550583 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:09.550921 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:09.550942 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:06:09.906062 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:06:09.906129 1687487 machine.go:97] duration metric: took 4.369846916s to provisionDockerMachine
	I1216 07:06:09.906156 1687487 start.go:293] postStartSetup for "ha-614518-m04" (driver="docker")
	I1216 07:06:09.906186 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:06:09.906302 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:06:09.906394 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.928571 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.043685 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:06:10.067794 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:06:10.067836 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:06:10.067850 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:06:10.067926 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:06:10.068023 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:06:10.068034 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:06:10.068175 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:06:10.080979 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:10.111023 1687487 start.go:296] duration metric: took 204.832511ms for postStartSetup
	I1216 07:06:10.111182 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:06:10.111258 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.133434 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.243926 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:06:10.252839 1687487 fix.go:56] duration metric: took 5.094861586s for fixHost
	I1216 07:06:10.252868 1687487 start.go:83] releasing machines lock for "ha-614518-m04", held for 5.094922297s
	I1216 07:06:10.252940 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:10.273934 1687487 out.go:179] * Found network options:
	I1216 07:06:10.276892 1687487 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1216 07:06:10.279702 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279739 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279765 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279776 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	I1216 07:06:10.279853 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:06:10.279897 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.280186 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:06:10.280250 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.304141 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.316532 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.464790 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:06:10.529284 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:06:10.529353 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:06:10.550769 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:06:10.550846 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:06:10.550924 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:06:10.551036 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:06:10.576598 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:06:10.598097 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:06:10.598259 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:06:10.618172 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:06:10.634284 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:06:10.768085 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:06:10.900504 1687487 docker.go:234] disabling docker service ...
	I1216 07:06:10.900581 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:06:10.927152 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:06:10.942383 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:06:11.076847 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:06:11.223349 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:06:11.239694 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:06:11.255054 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:06:11.255145 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.266034 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:06:11.266152 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.276524 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.286271 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.297358 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:06:11.307624 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.322735 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.331594 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.341363 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:06:11.355843 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:06:11.364696 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:11.491229 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:06:11.671501 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:06:11.671633 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:06:11.675428 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:06:11.675526 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:06:11.679282 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:06:11.704854 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:06:11.704992 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.737456 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.775396 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:06:11.778421 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:06:11.781653 1687487 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1216 07:06:11.784682 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:06:11.801080 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:06:11.805027 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:11.815307 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:06:11.815555 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:11.815814 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:06:11.835520 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:06:11.835825 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.5
	I1216 07:06:11.835840 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:06:11.835857 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:06:11.835999 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:06:11.836046 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:06:11.836063 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:06:11.836076 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:06:11.836096 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:06:11.836113 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:06:11.836166 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:06:11.836212 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:06:11.836243 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:06:11.836281 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:06:11.836313 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:06:11.836348 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:06:11.836418 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:11.836451 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:11.836505 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:06:11.836521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:06:11.836544 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:06:11.859722 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:06:11.879459 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:06:11.899359 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:06:11.925816 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:06:11.944678 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:06:11.966397 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:06:11.991349 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:06:11.998038 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.010525 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:06:12.021207 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026113 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026229 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.070208 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:06:12.077832 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.085281 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:06:12.093355 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097389 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097457 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.138619 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:06:12.146494 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.153809 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:06:12.162460 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166549 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166660 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.214872 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
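
The openssl/ln sequence above (for minikubeCA.pem, 1599255.pem and 15992552.pem) follows the standard OpenSSL hashed-directory layout: each certificate sits under /usr/share/ca-certificates and is exposed in /etc/ssl/certs through a symlink named <subject-hash>.0, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names come from. A hand-rolled equivalent for one certificate, as a sketch only:

    # Compute the OpenSSL subject hash and create the <hash>.0 lookup symlink
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    sudo test -L "/etc/ssl/certs/${hash}.0" && echo "linked as ${hash}.0"
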
	I1216 07:06:12.223038 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:06:12.226786 1687487 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 07:06:12.226832 1687487 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1216 07:06:12.226911 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:06:12.227009 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:06:12.235141 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:06:12.235238 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1216 07:06:12.243052 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:06:12.258163 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:06:12.272841 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:06:12.276276 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:12.286557 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.414923 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.430788 1687487 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1216 07:06:12.431230 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:12.434498 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:06:12.437537 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.560193 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.575224 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:06:12.575297 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:06:12.575574 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580068 1687487 node_ready.go:49] node "ha-614518-m04" is "Ready"
	I1216 07:06:12.580146 1687487 node_ready.go:38] duration metric: took 4.550298ms for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580174 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:12.580258 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:12.596724 1687487 system_svc.go:56] duration metric: took 16.541875ms WaitForService to wait for kubelet
	I1216 07:06:12.596751 1687487 kubeadm.go:587] duration metric: took 165.918494ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:06:12.596771 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:12.600376 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600404 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600416 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600421 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600449 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600453 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600511 1687487 node_conditions.go:105] duration metric: took 3.699966ms to run NodePressure ...
	I1216 07:06:12.600548 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:12.600573 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:12.600919 1687487 ssh_runner.go:195] Run: rm -f paused
	I1216 07:06:12.604585 1687487 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:06:12.605147 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:06:12.622024 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:06:14.630183 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:17.128396 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:19.129109 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:21.129471 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:23.629238 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	I1216 07:06:24.644123 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-j2dlk" is "Ready"
	I1216 07:06:24.644155 1687487 pod_ready.go:86] duration metric: took 12.022101955s for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:24.644167 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.653985 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-wnl5v" is "Ready"
	I1216 07:06:25.654011 1687487 pod_ready.go:86] duration metric: took 1.009837557s for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.657436 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663112 1687487 pod_ready.go:94] pod "etcd-ha-614518" is "Ready"
	I1216 07:06:25.663199 1687487 pod_ready.go:86] duration metric: took 5.737586ms for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663224 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668572 1687487 pod_ready.go:94] pod "etcd-ha-614518-m02" is "Ready"
	I1216 07:06:25.668654 1687487 pod_ready.go:86] duration metric: took 5.405889ms for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668681 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.673835 1687487 pod_ready.go:99] pod "etcd-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "etcd-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:25.673908 1687487 pod_ready.go:86] duration metric: took 5.206207ms for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.823380 1687487 request.go:683] "Waited before sending request" delay="149.293024ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1216 07:06:25.826990 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.023449 1687487 request.go:683] "Waited before sending request" delay="196.318606ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518"
	I1216 07:06:26.223386 1687487 request.go:683] "Waited before sending request" delay="196.351246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:26.226414 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518" is "Ready"
	I1216 07:06:26.226443 1687487 pod_ready.go:86] duration metric: took 399.426362ms for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.226454 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.422838 1687487 request.go:683] "Waited before sending request" delay="196.262613ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m02"
	I1216 07:06:26.623137 1687487 request.go:683] "Waited before sending request" delay="197.08654ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:26.626398 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518-m02" is "Ready"
	I1216 07:06:26.626428 1687487 pod_ready.go:86] duration metric: took 399.966937ms for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.626438 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.822787 1687487 request.go:683] "Waited before sending request" delay="196.265148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m03"
	I1216 07:06:27.023430 1687487 request.go:683] "Waited before sending request" delay="197.365ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m03"
	I1216 07:06:27.026875 1687487 pod_ready.go:99] pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-apiserver-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:27.026914 1687487 pod_ready.go:86] duration metric: took 400.4598ms for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.223376 1687487 request.go:683] "Waited before sending request" delay="196.348931ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1216 07:06:27.227355 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.423607 1687487 request.go:683] "Waited before sending request" delay="196.15765ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:27.623198 1687487 request.go:683] "Waited before sending request" delay="196.252798ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:27.822756 1687487 request.go:683] "Waited before sending request" delay="94.181569ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:28.023498 1687487 request.go:683] "Waited before sending request" delay="197.337742ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.423277 1687487 request.go:683] "Waited before sending request" delay="191.324919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.823130 1687487 request.go:683] "Waited before sending request" delay="90.229358ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	W1216 07:06:29.235219 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:31.235951 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:33.734756 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:35.735390 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:38.234527 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:40.734172 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:42.734590 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	I1216 07:06:43.234658 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518" is "Ready"
	I1216 07:06:43.234687 1687487 pod_ready.go:86] duration metric: took 16.007305361s for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.234697 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246154 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518-m02" is "Ready"
	I1216 07:06:43.246184 1687487 pod_ready.go:86] duration metric: took 11.479167ms for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246194 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.251708 1687487 pod_ready.go:99] pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-controller-manager-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:43.251789 1687487 pod_ready.go:86] duration metric: took 5.587232ms for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.255005 1687487 pod_ready.go:83] waiting for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260772 1687487 pod_ready.go:94] pod "kube-proxy-4kdt5" is "Ready"
	I1216 07:06:43.260800 1687487 pod_ready.go:86] duration metric: took 5.764523ms for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260811 1687487 pod_ready.go:83] waiting for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.427957 1687487 request.go:683] "Waited before sending request" delay="164.183098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m04"
	I1216 07:06:43.431695 1687487 pod_ready.go:94] pod "kube-proxy-bmxpt" is "Ready"
	I1216 07:06:43.431727 1687487 pod_ready.go:86] duration metric: took 170.908436ms for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.431744 1687487 pod_ready.go:83] waiting for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.628038 1687487 request.go:683] "Waited before sending request" delay="196.208729ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhwcs"
	I1216 07:06:43.827976 1687487 request.go:683] "Waited before sending request" delay="196.30094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:43.837294 1687487 pod_ready.go:94] pod "kube-proxy-fhwcs" is "Ready"
	I1216 07:06:43.837327 1687487 pod_ready.go:86] duration metric: took 405.576793ms for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.837339 1687487 pod_ready.go:83] waiting for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.028582 1687487 request.go:683] "Waited before sending request" delay="191.164568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqr57"
	I1216 07:06:44.031704 1687487 pod_ready.go:99] pod "kube-proxy-qqr57" in "kube-system" namespace is gone: getting pod "kube-proxy-qqr57" in "kube-system" namespace (will retry): pods "kube-proxy-qqr57" not found
	I1216 07:06:44.031728 1687487 pod_ready.go:86] duration metric: took 194.382484ms for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.228023 1687487 request.go:683] "Waited before sending request" delay="196.190299ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1216 07:06:44.234797 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.428282 1687487 request.go:683] "Waited before sending request" delay="193.336711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518"
	I1216 07:06:44.627997 1687487 request.go:683] "Waited before sending request" delay="196.267207ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:44.631577 1687487 pod_ready.go:94] pod "kube-scheduler-ha-614518" is "Ready"
	I1216 07:06:44.631604 1687487 pod_ready.go:86] duration metric: took 396.729655ms for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.631613 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.828815 1687487 request.go:683] "Waited before sending request" delay="197.130733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.028338 1687487 request.go:683] "Waited before sending request" delay="191.46624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.228724 1687487 request.go:683] "Waited before sending request" delay="96.318053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.428563 1687487 request.go:683] "Waited before sending request" delay="191.750075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.828353 1687487 request.go:683] "Waited before sending request" delay="192.34026ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:46.228325 1687487 request.go:683] "Waited before sending request" delay="93.248724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	W1216 07:06:46.637948 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:49.139119 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:51.638109 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:53.638454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:56.139011 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:58.638095 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:00.638769 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:03.139265 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:05.638593 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:07.638799 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:10.138642 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:12.638602 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:14.641618 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:17.139071 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:19.638792 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:22.138682 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:24.143581 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:26.637942 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:28.638514 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:30.639228 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:32.639571 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:35.139503 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:37.142108 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:39.637866 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:41.638931 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:44.139294 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:46.638205 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:48.638829 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:50.643744 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:53.139962 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:55.140229 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:57.638356 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:00.161064 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:02.638288 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:04.640454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:07.138771 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:09.638023 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:11.638274 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:13.638989 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:16.137649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:18.138649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:20.138856 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:22.638044 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:25.139148 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:27.638438 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:29.638561 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:31.638878 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:34.138583 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:36.638791 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:39.138672 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:41.143386 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:43.638185 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:45.640021 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:48.137933 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:50.638587 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:53.138384 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:55.138692 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:57.638524 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:00.191960 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:02.638290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:04.639287 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:07.139404 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:09.638715 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:12.137968 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:14.138290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:16.138420 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:18.638585 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:20.639656 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:23.138623 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:25.638409 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:27.643066 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:30.140779 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:32.638747 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:34.639250 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:37.137644 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:39.138045 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:41.138733 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:43.139171 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:45.142012 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:47.638719 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:50.139130 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:52.637794 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:54.638451 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:57.137807 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:59.640347 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:02.138615 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:04.140843 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:06.639153 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:09.139049 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:11.139172 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	I1216 07:10:12.605718 1687487 pod_ready.go:86] duration metric: took 3m27.974087596s for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:10:12.605749 1687487 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1216 07:10:12.605764 1687487 pod_ready.go:40] duration metric: took 4m0.001147095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:10:12.608877 1687487 out.go:203] 
	W1216 07:10:12.611764 1687487 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1216 07:10:12.614690 1687487 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.124962814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.124989079Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.128952589Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.128991022Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.12901366Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132385483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132445241Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132506854Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.13550428Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.135541393Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:40 ha-614518 conmon[1338]: conmon 5fb83a33391310c66121 <ninfo>: container 1340 exited with status 1
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.136243764Z" level=info msg="Removing container: 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.144051183Z" level=info msg="Error loading conmon cgroup of container 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7: cgroup deleted" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.148857672Z" level=info msg="Removed container 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7: kube-system/storage-provisioner/storage-provisioner" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.517109075Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=970ca7aa-d95d-4794-95bf-de423f4d674f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.51851262Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a32745bb-0259-4638-a2da-ddc22003b22b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.519651775Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d94c0706-3299-449d-b1bc-9c7684af150f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.519773393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524607418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524785537Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/312caaa5394938283ea578f1d27f8818b3e8f0134608b0a17d956f12767c2e19/merged/etc/passwd: no such file or directory"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524806846Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/312caaa5394938283ea578f1d27f8818b3e8f0134608b0a17d956f12767c2e19/merged/etc/group: no such file or directory"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.525065295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.544389448Z" level=info msg="Created container 1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56: kube-system/storage-provisioner/storage-provisioner" id=d94c0706-3299-449d-b1bc-9c7684af150f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.546147943Z" level=info msg="Starting container: 1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56" id=2384d913-1660-45b4-a9e4-4a12ccf89aa1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.552972588Z" level=info msg="Started container" PID=1546 containerID=1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56 description=kube-system/storage-provisioner/storage-provisioner id=2384d913-1660-45b4-a9e4-4a12ccf89aa1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66940fef199cd7ea95fa467d76afd336228ac898a0c1f0e8c7b18e7972031eff
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	1093de574e036       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       7                   66940fef199cd       storage-provisioner                 kube-system
	62f5148caf573       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago       Running             kube-controller-manager   8                   b4a4e435e1aa0       kube-controller-manager-ha-614518   kube-system
	5fb83a3339131       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       6                   66940fef199cd       storage-provisioner                 kube-system
	95092e298b4a2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   4 minutes ago       Exited              kube-controller-manager   7                   b4a4e435e1aa0       kube-controller-manager-ha-614518   kube-system
	d39155885e822       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   041859eb301b3       coredns-66bc5c9577-j2dlk            kube-system
	6e64e350bfcdb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 minutes ago       Running             kindnet-cni               2                   ceeed389a3540       kindnet-t2849                       kube-system
	df7febb900c92       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   6 minutes ago       Running             kube-proxy                2                   e6f1de1edc5ee       kube-proxy-4kdt5                    kube-system
	e3a995a401390       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   2                   288cd575c38a7       busybox-7b57f96db7-9rkhz            default
	a0d878c4d93ed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   6735c66af1b27       coredns-66bc5c9577-wnl5v            kube-system
	11e4b44d62d54       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  2                   01654879d92ce       kube-vip-ha-614518                  kube-system
	b6e4d702970e6       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   6 minutes ago       Running             etcd                      2                   b24e85033a9a6       etcd-ha-614518                      kube-system
	c0e9d15ebb1cd       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   6 minutes ago       Running             kube-scheduler            2                   2ec038c0eb369       kube-scheduler-ha-614518            kube-system
	db591d0d437f8       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   6 minutes ago       Running             kube-apiserver            2                   3ea7ac550801f       kube-apiserver-ha-614518            kube-system
	
	
	==> coredns [a0d878c4d93ed5aa6b99a6ea96df4f5ccb53c918a3bac903f7dae29fc1cf61ee] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d39155885e822c355840ab6f40d6597b04bb705e1978f74a686ce74f90174ae9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-614518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T06_55_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:55:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:10:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:06:25 +0000   Tue, 16 Dec 2025 07:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-614518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                95037a50-a335-45c4-b961-153de44dd8af
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9rkhz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-j2dlk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-wnl5v             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-614518                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-t2849                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-614518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-614518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4kdt5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-614518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-614518                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 3m58s                  kube-proxy       
	  Normal   Starting                 8m15s                  kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-614518 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   NodeHasSufficientPID     8m51s (x8 over 8m51s)  kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m51s (x8 over 8m51s)  kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m51s (x8 over 8m51s)  kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m56s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           7m35s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   Starting                 6m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m26s (x8 over 6m26s)  kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m35s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           3m46s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	
	
	Name:               ha-614518-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_16T06_56_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:08:44 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-614518-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                0e50aad9-c8f5-4539-a363-29b4940497ef
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q9kjv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-614518-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-qpdxp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-614518-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-614518-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fhwcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-614518-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-614518-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 4m10s                  kube-proxy       
	  Normal   Starting                 8m5s                   kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   NodeHasNoDiskPressure    9m58s (x8 over 9m58s)  kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  9m58s (x8 over 9m58s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m58s (x8 over 9m58s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             9m33s                  node-controller  Node ha-614518-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Warning  CgroupV1                 8m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m47s (x8 over 8m48s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m47s (x8 over 8m48s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m47s (x8 over 8m48s)  kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m56s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           7m35s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   Starting                 6m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m23s (x8 over 6m23s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m23s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m35s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           3m46s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	
	
	Name:               ha-614518-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_16T06_58_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:58:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:10:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:59:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-614518-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                b5a1c428-1aac-458a-ac8c-b2278f4653df
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-d8h6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kindnet-kwm49               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-bmxpt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m24s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 3m54s                  kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x3 over 11m)      kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x3 over 11m)      kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x3 over 11m)      kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-614518-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           7m56s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Warning  CgroupV1                 7m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 7m48s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   NodeHasNoDiskPressure    7m44s (x8 over 7m47s)  kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m44s (x8 over 7m47s)  kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m44s (x8 over 7m47s)  kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m35s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           4m35s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   Starting                 4m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m9s (x8 over 4m12s)   kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m9s (x8 over 4m12s)   kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m9s (x8 over 4m12s)   kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m46s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	[Dec16 06:55] overlayfs: idmapped layers are currently not supported
	[Dec16 06:56] overlayfs: idmapped layers are currently not supported
	[Dec16 06:57] overlayfs: idmapped layers are currently not supported
	[Dec16 06:58] overlayfs: idmapped layers are currently not supported
	[Dec16 07:00] overlayfs: idmapped layers are currently not supported
	[Dec16 07:01] overlayfs: idmapped layers are currently not supported
	[  +3.826905] overlayfs: idmapped layers are currently not supported
	[Dec16 07:02] overlayfs: idmapped layers are currently not supported
	[ +35.241631] overlayfs: idmapped layers are currently not supported
	[Dec16 07:03] overlayfs: idmapped layers are currently not supported
	[  +2.815105] overlayfs: idmapped layers are currently not supported
	[Dec16 07:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b6e4d702970e634028ab9da9ca8e258d02bb0aa908a74a428d72bd35cdec320d] <==
	{"level":"info","ts":"2025-12-16T07:05:37.111144Z","caller":"traceutil/trace.go:172","msg":"trace[478204759] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:2579; }","duration":"153.657765ms","start":"2025-12-16T07:05:36.957482Z","end":"2025-12-16T07:05:37.111139Z","steps":["trace[478204759] 'agreement among raft nodes before linearized reading'  (duration: 153.641888ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111265Z","caller":"traceutil/trace.go:172","msg":"trace[1751390839] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:2; response_revision:2579; }","duration":"153.79544ms","start":"2025-12-16T07:05:36.957465Z","end":"2025-12-16T07:05:37.111260Z","steps":["trace[1751390839] 'agreement among raft nodes before linearized reading'  (duration: 153.738282ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111362Z","caller":"traceutil/trace.go:172","msg":"trace[28163213] range","detail":"{range_begin:/registry/resourceclaims/; range_end:/registry/resourceclaims0; response_count:0; response_revision:2579; }","duration":"153.909944ms","start":"2025-12-16T07:05:36.957447Z","end":"2025-12-16T07:05:37.111357Z","steps":["trace[28163213] 'agreement among raft nodes before linearized reading'  (duration: 153.891507ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111450Z","caller":"traceutil/trace.go:172","msg":"trace[1848148655] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:2579; }","duration":"154.017343ms","start":"2025-12-16T07:05:36.957428Z","end":"2025-12-16T07:05:37.111446Z","steps":["trace[1848148655] 'agreement among raft nodes before linearized reading'  (duration: 154.000407ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111616Z","caller":"traceutil/trace.go:172","msg":"trace[2000339098] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:8; response_revision:2579; }","duration":"154.198949ms","start":"2025-12-16T07:05:36.957412Z","end":"2025-12-16T07:05:37.111611Z","steps":["trace[2000339098] 'agreement among raft nodes before linearized reading'  (duration: 154.103251ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111724Z","caller":"traceutil/trace.go:172","msg":"trace[1572280958] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2579; }","duration":"154.327336ms","start":"2025-12-16T07:05:36.957392Z","end":"2025-12-16T07:05:37.111719Z","steps":["trace[1572280958] 'agreement among raft nodes before linearized reading'  (duration: 154.307217ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111819Z","caller":"traceutil/trace.go:172","msg":"trace[1774297222] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:2579; }","duration":"154.438599ms","start":"2025-12-16T07:05:36.957375Z","end":"2025-12-16T07:05:37.111814Z","steps":["trace[1774297222] 'agreement among raft nodes before linearized reading'  (duration: 154.420826ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111903Z","caller":"traceutil/trace.go:172","msg":"trace[2023288818] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:2579; }","duration":"154.540352ms","start":"2025-12-16T07:05:36.957359Z","end":"2025-12-16T07:05:37.111899Z","steps":["trace[2023288818] 'agreement among raft nodes before linearized reading'  (duration: 154.524622ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.111988Z","caller":"traceutil/trace.go:172","msg":"trace[1206328630] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:2579; }","duration":"154.643664ms","start":"2025-12-16T07:05:36.957341Z","end":"2025-12-16T07:05:37.111985Z","steps":["trace[1206328630] 'agreement among raft nodes before linearized reading'  (duration: 154.626515ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112132Z","caller":"traceutil/trace.go:172","msg":"trace[1906072706] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:11; response_revision:2579; }","duration":"154.802829ms","start":"2025-12-16T07:05:36.957325Z","end":"2025-12-16T07:05:37.112128Z","steps":["trace[1906072706] 'agreement among raft nodes before linearized reading'  (duration: 154.723689ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112237Z","caller":"traceutil/trace.go:172","msg":"trace[871907471] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:2579; }","duration":"154.926703ms","start":"2025-12-16T07:05:36.957305Z","end":"2025-12-16T07:05:37.112231Z","steps":["trace[871907471] 'agreement among raft nodes before linearized reading'  (duration: 154.909981ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112324Z","caller":"traceutil/trace.go:172","msg":"trace[864616195] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:2579; }","duration":"155.901796ms","start":"2025-12-16T07:05:36.956418Z","end":"2025-12-16T07:05:37.112320Z","steps":["trace[864616195] 'agreement among raft nodes before linearized reading'  (duration: 155.884368ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.112450Z","caller":"traceutil/trace.go:172","msg":"trace[1944004236] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2579; }","duration":"156.045585ms","start":"2025-12-16T07:05:36.956400Z","end":"2025-12-16T07:05:37.112445Z","steps":["trace[1944004236] 'agreement among raft nodes before linearized reading'  (duration: 155.989461ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.117587Z","caller":"traceutil/trace.go:172","msg":"trace[1696147218] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:2579; }","duration":"161.197491ms","start":"2025-12-16T07:05:36.956382Z","end":"2025-12-16T07:05:37.117580Z","steps":["trace[1696147218] 'agreement among raft nodes before linearized reading'  (duration: 161.11758ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.117791Z","caller":"traceutil/trace.go:172","msg":"trace[223674650] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:12; response_revision:2579; }","duration":"161.419196ms","start":"2025-12-16T07:05:36.956367Z","end":"2025-12-16T07:05:37.117786Z","steps":["trace[223674650] 'agreement among raft nodes before linearized reading'  (duration: 161.346531ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.117988Z","caller":"traceutil/trace.go:172","msg":"trace[185344251] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:21; response_revision:2579; }","duration":"161.63387ms","start":"2025-12-16T07:05:36.956349Z","end":"2025-12-16T07:05:37.117983Z","steps":["trace[185344251] 'agreement among raft nodes before linearized reading'  (duration: 161.538615ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118106Z","caller":"traceutil/trace.go:172","msg":"trace[1006990986] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:2579; }","duration":"161.781088ms","start":"2025-12-16T07:05:36.956320Z","end":"2025-12-16T07:05:37.118101Z","steps":["trace[1006990986] 'agreement among raft nodes before linearized reading'  (duration: 161.7579ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118199Z","caller":"traceutil/trace.go:172","msg":"trace[958088094] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:2579; }","duration":"161.891234ms","start":"2025-12-16T07:05:36.956302Z","end":"2025-12-16T07:05:37.118194Z","steps":["trace[958088094] 'agreement among raft nodes before linearized reading'  (duration: 161.870491ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118290Z","caller":"traceutil/trace.go:172","msg":"trace[450497122] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:2579; }","duration":"162.006707ms","start":"2025-12-16T07:05:36.956279Z","end":"2025-12-16T07:05:37.118286Z","steps":["trace[450497122] 'agreement among raft nodes before linearized reading'  (duration: 161.989566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118483Z","caller":"traceutil/trace.go:172","msg":"trace[1764111923] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:2579; }","duration":"162.653352ms","start":"2025-12-16T07:05:36.955825Z","end":"2025-12-16T07:05:37.118478Z","steps":["trace[1764111923] 'agreement among raft nodes before linearized reading'  (duration: 162.534417ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118605Z","caller":"traceutil/trace.go:172","msg":"trace[1832685940] range","detail":"{range_begin:/registry/events/default/ha-614518.1881a02a52654ab4; range_end:; response_count:1; response_revision:2579; }","duration":"162.798881ms","start":"2025-12-16T07:05:36.955802Z","end":"2025-12-16T07:05:37.118601Z","steps":["trace[1832685940] 'agreement among raft nodes before linearized reading'  (duration: 162.752414ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118689Z","caller":"traceutil/trace.go:172","msg":"trace[1910732752] range","detail":"{range_begin:/registry/resourceslices; range_end:; response_count:0; response_revision:2579; }","duration":"164.621942ms","start":"2025-12-16T07:05:36.954063Z","end":"2025-12-16T07:05:37.118685Z","steps":["trace[1910732752] 'agreement among raft nodes before linearized reading'  (duration: 164.605434ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.118767Z","caller":"traceutil/trace.go:172","msg":"trace[1026138916] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"175.255622ms","start":"2025-12-16T07:05:36.943507Z","end":"2025-12-16T07:05:37.118763Z","steps":["trace[1026138916] 'agreement among raft nodes before linearized reading'  (duration: 175.242846ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.139792Z","caller":"traceutil/trace.go:172","msg":"trace[1783019215] transaction","detail":"{read_only:false; response_revision:2580; number_of_response:1; }","duration":"102.74853ms","start":"2025-12-16T07:05:37.037022Z","end":"2025-12-16T07:05:37.139771Z","steps":["trace[1783019215] 'process raft request'  (duration: 100.394402ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T07:05:37.179847Z","caller":"traceutil/trace.go:172","msg":"trace[311341052] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2582; }","duration":"143.874409ms","start":"2025-12-16T07:05:37.035961Z","end":"2025-12-16T07:05:37.179836Z","steps":["trace[311341052] 'agreement among raft nodes before linearized reading'  (duration: 139.025227ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:10:18 up  9:52,  0 user,  load average: 0.53, 1.58, 1.57
	Linux ha-614518 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e64e350bfcdb0ad3cefabf63e1a4acc10762dcf6c5cfb20629a03af5db77445] <==
	I1216 07:09:33.121926       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:09:43.121776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:09:43.121809       1 main.go:301] handling current node
	I1216 07:09:43.121825       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:09:43.121831       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:09:43.122006       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:09:43.122019       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:09:53.121506       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:09:53.121631       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:09:53.121782       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:09:53.121798       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:09:53.121854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:09:53.121866       1 main.go:301] handling current node
	I1216 07:10:03.121659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:10:03.121690       1 main.go:301] handling current node
	I1216 07:10:03.121707       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:10:03.121714       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:10:03.121865       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:10:03.121880       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:10:13.120855       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:10:13.120895       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:10:13.121050       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:10:13.121069       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:10:13.121178       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:10:13.121192       1 main.go:301] handling current node
	
	
	==> kube-apiserver [db591d0d437f81b8c65552b6efbd2ca8fb29bb1e0989d62b2cce8be69b46105c] <==
	{"level":"warn","ts":"2025-12-16T07:05:36.946229Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018af860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946250Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40030372c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946270Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40010305a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946293Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4b680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946313Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd7c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946331Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4a780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946455Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018ae3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946777Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953177Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a5680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953240Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400137ba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953261Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c85c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953280Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c84000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953301Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400274cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953323Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018563c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953342Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018574a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953358Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953376Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001856d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953393Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953410Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40017050e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953428Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002be8f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953452Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002213c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953475Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400286b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953492Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4a000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1216 07:05:52.809366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1216 07:06:04.987898       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	
	
	==> kube-controller-manager [62f5148caf57328eb2231340bd1f0fda0819319965c786abfdb83aeb5ed01f5e] <==
	I1216 07:06:32.762952       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 07:06:32.762965       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 07:06:32.763103       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 07:06:32.763212       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-614518"
	I1216 07:06:32.763278       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-614518-m02"
	I1216 07:06:32.763330       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-614518-m04"
	I1216 07:06:32.763013       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-614518-m04"
	I1216 07:06:32.763628       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 07:06:32.767053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:06:32.771968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:06:32.772032       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 07:06:32.772041       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 07:06:32.772555       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 07:06:32.776046       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 07:06:32.776052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 07:06:32.786517       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 07:06:32.786581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 07:06:32.786621       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 07:06:32.786648       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 07:06:32.786659       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 07:06:32.786673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 07:06:32.798877       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 07:06:32.805236       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 07:06:32.809401       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 07:06:32.814713       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-controller-manager [95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478] <==
	I1216 07:05:28.645623       1 serving.go:386] Generated self-signed cert in-memory
	I1216 07:05:29.867986       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1216 07:05:29.868027       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:05:29.870775       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 07:05:29.870890       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 07:05:29.871397       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1216 07:05:29.871485       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 07:05:41.889989       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [df7febb900c92c1ec552f11013f0ffc72f6a301ff2a34356063a3a3d5508e6f6] <==
	E1216 07:04:12.258444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:21.248820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:33.125461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:58.756889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:05:31.552851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1216 07:06:20.432877       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 07:06:20.432912       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 07:06:20.432992       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 07:06:20.451656       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 07:06:20.451712       1 server_linux.go:132] "Using iptables Proxier"
	I1216 07:06:20.455545       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 07:06:20.455867       1 server.go:527] "Version info" version="v1.34.2"
	I1216 07:06:20.455889       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:06:20.457590       1 config.go:200] "Starting service config controller"
	I1216 07:06:20.457611       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 07:06:20.457630       1 config.go:106] "Starting endpoint slice config controller"
	I1216 07:06:20.457635       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 07:06:20.457646       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 07:06:20.457649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 07:06:20.458370       1 config.go:309] "Starting node config controller"
	I1216 07:06:20.458392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 07:06:20.458399       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 07:06:20.558370       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 07:06:20.558388       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 07:06:20.558421       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0e9d15ebb1cd884461c491d76b9c135253b28403f1a18a97c1bdb68443fe858] <==
	E1216 07:03:59.819964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 07:03:59.820147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 07:03:59.820256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 07:03:59.820687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 07:03:59.824783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1216 07:04:00.679883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 07:04:00.726192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:00.776690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 07:04:00.841266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 07:04:00.859797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 07:04:00.879356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 07:04:00.886912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 07:04:00.919634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 07:04:00.958908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 07:04:00.959058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 07:04:01.001026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 07:04:01.006661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 07:04:01.010174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 07:04:01.037770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 07:04:01.074332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 07:04:01.101325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 07:04:01.113105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 07:04:01.257180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1216 07:04:01.380284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1216 07:04:04.392043       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 07:05:41 ha-614518 kubelet[805]: E1216 07:05:41.963351     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:43 ha-614518 kubelet[805]: I1216 07:05:43.172588     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:05:43 ha-614518 kubelet[805]: E1216 07:05:43.173239     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:47 ha-614518 kubelet[805]: I1216 07:05:47.980385     805 scope.go:117] "RemoveContainer" containerID="1b90f35e8fe79482d5c14218f1e2e65c47d65394a6eeb0612fbb2b19206d27c7"
	Dec 16 07:05:47 ha-614518 kubelet[805]: I1216 07:05:47.980741     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:05:47 ha-614518 kubelet[805]: E1216 07:05:47.980882     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:05:51 ha-614518 kubelet[805]: I1216 07:05:51.159869     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:05:51 ha-614518 kubelet[805]: E1216 07:05:51.160099     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:59 ha-614518 kubelet[805]: I1216 07:05:59.515424     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:05:59 ha-614518 kubelet[805]: E1216 07:05:59.516065     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:06:02 ha-614518 kubelet[805]: I1216 07:06:02.518762     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:02 ha-614518 kubelet[805]: E1216 07:06:02.519407     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:06:10 ha-614518 kubelet[805]: I1216 07:06:10.515860     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:06:16 ha-614518 kubelet[805]: I1216 07:06:16.515241     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:16 ha-614518 kubelet[805]: E1216 07:06:16.515866     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:06:28 ha-614518 kubelet[805]: I1216 07:06:28.515045     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:41 ha-614518 kubelet[805]: I1216 07:06:41.132515     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:06:41 ha-614518 kubelet[805]: I1216 07:06:41.132828     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:06:41 ha-614518 kubelet[805]: E1216 07:06:41.132959     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:06:52 ha-614518 kubelet[805]: E1216 07:06:52.544364     805 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1e1bffb0be7696eafc690b57ae72d068d188db906113cb72328c74f36504929d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1e1bffb0be7696eafc690b57ae72d068d188db906113cb72328c74f36504929d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_storage-provisioner_c8b9c00b-10bc-423c-b16e-3f3cdb12e907/storage-provisioner/5.log" to get inode usage: stat /var/log/pods/kube-system_storage-provisioner_c8b9c00b-10bc-423c-b16e-3f3cdb12e907/storage-provisioner/5.log: no such file or directory
	Dec 16 07:06:56 ha-614518 kubelet[805]: I1216 07:06:56.515333     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:06:56 ha-614518 kubelet[805]: E1216 07:06:56.515497     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:07:08 ha-614518 kubelet[805]: I1216 07:07:08.515499     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:07:08 ha-614518 kubelet[805]: E1216 07:07:08.515687     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:07:21 ha-614518 kubelet[805]: I1216 07:07:21.515729     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-614518 -n ha-614518
helpers_test.go:270: (dbg) Run:  kubectl --context ha-614518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.072741273s)
ha_test.go:309: expected profile "ha-614518" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-614518\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-614518\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-614518\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-p
lugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fals
e,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-614518
helpers_test.go:244: (dbg) docker inspect ha-614518:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46",
	        "Created": "2025-12-16T06:55:15.920807949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1687611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T07:03:45.310819447Z",
	            "FinishedAt": "2025-12-16T07:03:44.437347575Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/hosts",
	        "LogPath": "/var/lib/docker/containers/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46-json.log",
	        "Name": "/ha-614518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-614518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-614518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46",
	                "LowerDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04f114c45138ebdd19c57b7c35226a13895bf218ac7fbb3e830bb8c8d7681245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-614518",
	                "Source": "/var/lib/docker/volumes/ha-614518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-614518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-614518",
	                "name.minikube.sigs.k8s.io": "ha-614518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84d9c6998ba47bdb877c4913d6988c8320c2f46bb6d33489550ea4eb54ae2b9c",
	            "SandboxKey": "/var/run/docker/netns/84d9c6998ba4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34310"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34311"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34313"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-614518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:8c:71:16:ba:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34c8049a560aca568d8e67043aef245d26603d1e6b5021bc9413fe96f5cfa4f6",
	                    "EndpointID": "128f0ab3a1ff878dc623fde0aadf19698e2b387b41dbec7082d4a76b9a429095",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-614518",
	                        "e2503ac81b82"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-614518 -n ha-614518
helpers_test.go:253: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 logs -n 25: (2.048344273s)
helpers_test.go:261: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-614518 ssh -n ha-614518-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test_ha-614518-m03_ha-614518-m04.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp testdata/cp-test.txt ha-614518-m04:/home/docker/cp-test.txt                                                             │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1403810740/001/cp-test_ha-614518-m04.txt │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518:/home/docker/cp-test_ha-614518-m04_ha-614518.txt                       │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518.txt                                                 │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 06:59 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m02:/home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 06:59 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m02 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ cp      │ ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m03:/home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt               │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ ssh     │ ha-614518 ssh -n ha-614518-m03 sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node start m02 --alsologtostderr -v 5                                                                                      │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:00 UTC │
	│ node    │ ha-614518 node list --alsologtostderr -v 5                                                                                           │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │                     │
	│ stop    │ ha-614518 stop --alsologtostderr -v 5                                                                                                │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:00 UTC │ 16 Dec 25 07:01 UTC │
	│ start   │ ha-614518 start --wait true --alsologtostderr -v 5                                                                                   │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:01 UTC │ 16 Dec 25 07:02 UTC │
	│ node    │ ha-614518 node list --alsologtostderr -v 5                                                                                           │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:02 UTC │                     │
	│ node    │ ha-614518 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:02 UTC │ 16 Dec 25 07:03 UTC │
	│ stop    │ ha-614518 stop --alsologtostderr -v 5                                                                                                │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:03 UTC │ 16 Dec 25 07:03 UTC │
	│ start   │ ha-614518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:03 UTC │                     │
	│ node    │ ha-614518 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-614518 │ jenkins │ v1.37.0 │ 16 Dec 25 07:10 UTC │ 16 Dec 25 07:11 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 07:03:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 07:03:44.880217 1687487 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:03:44.880366 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880378 1687487 out.go:374] Setting ErrFile to fd 2...
	I1216 07:03:44.880384 1687487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.880665 1687487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:03:44.881079 1687487 out.go:368] Setting JSON to false
	I1216 07:03:44.882032 1687487 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":35176,"bootTime":1765833449,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:03:44.882105 1687487 start.go:143] virtualization:  
	I1216 07:03:44.885307 1687487 out.go:179] * [ha-614518] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:03:44.889019 1687487 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:03:44.889105 1687487 notify.go:221] Checking for updates...
	I1216 07:03:44.894878 1687487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:03:44.897985 1687487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:44.900761 1687487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:03:44.903578 1687487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:03:44.906467 1687487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:03:44.909985 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:44.910567 1687487 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:03:44.945233 1687487 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:03:44.945374 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.031657 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.011244188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.031829 1687487 docker.go:319] overlay module found
	I1216 07:03:45.037435 1687487 out.go:179] * Using the docker driver based on existing profile
	I1216 07:03:45.040996 1687487 start.go:309] selected driver: docker
	I1216 07:03:45.041023 1687487 start.go:927] validating driver "docker" against &{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.041175 1687487 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:03:45.041288 1687487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:03:45.134661 1687487 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-16 07:03:45.119026433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:03:45.135091 1687487 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:03:45.135120 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:45.135176 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:45.135234 1687487 start.go:353] cluster config:
	{Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:45.149972 1687487 out.go:179] * Starting "ha-614518" primary control-plane node in "ha-614518" cluster
	I1216 07:03:45.153136 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:45.159266 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:45.170928 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:45.170953 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:45.171004 1687487 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 07:03:45.171018 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:45.171117 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:45.171128 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:45.171285 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.215544 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:45.215626 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:45.215662 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:45.215843 1687487 start.go:360] acquireMachinesLock for ha-614518: {Name:mk3b1063af1f3d64814d71b86469148e674fab2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:45.216121 1687487 start.go:364] duration metric: took 138.127µs to acquireMachinesLock for "ha-614518"
	I1216 07:03:45.216289 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:45.216367 1687487 fix.go:54] fixHost starting: 
	I1216 07:03:45.217861 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.257760 1687487 fix.go:112] recreateIfNeeded on ha-614518: state=Stopped err=<nil>
	W1216 07:03:45.257825 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:45.263736 1687487 out.go:252] * Restarting existing docker container for "ha-614518" ...
	I1216 07:03:45.263878 1687487 cli_runner.go:164] Run: docker start ha-614518
	I1216 07:03:45.543794 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:45.563314 1687487 kic.go:430] container "ha-614518" state is running.
	I1216 07:03:45.563689 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:45.584894 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:45.585139 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:45.585210 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:45.605415 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:45.606022 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:45.606037 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:45.607343 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36692->127.0.0.1:34310: read: connection reset by peer
	I1216 07:03:48.740166 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.740200 1687487 ubuntu.go:182] provisioning hostname "ha-614518"
	I1216 07:03:48.740337 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.763945 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.764266 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.764282 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518 && echo "ha-614518" | sudo tee /etc/hostname
	I1216 07:03:48.905449 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518
	
	I1216 07:03:48.905536 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:48.922159 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:48.922475 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:48.922498 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:49.056835 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:03:49.056862 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:49.056897 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:49.056913 1687487 provision.go:84] configureAuth start
	I1216 07:03:49.056990 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:49.074475 1687487 provision.go:143] copyHostCerts
	I1216 07:03:49.074521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074564 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:49.074584 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:49.074664 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:49.074753 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074776 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:49.074785 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:49.074812 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:49.074873 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074892 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:49.074902 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:49.074929 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:49.074985 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518 san=[127.0.0.1 192.168.49.2 ha-614518 localhost minikube]
	I1216 07:03:49.677070 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:49.677146 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:49.677189 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.696012 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:49.796234 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:49.796294 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:49.813987 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:49.814051 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1216 07:03:49.832994 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:49.833117 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:49.852358 1687487 provision.go:87] duration metric: took 795.417685ms to configureAuth
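	(A quick sanity check, sketched only: the certificate path is the one logged above, run on the Jenkins host that owns the .minikube store. The SAN list should match the one logged by provision.go:117.)

	    openssl x509 -noout -subject -dates \
	      -in /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'   # expect: 127.0.0.1, 192.168.49.2, ha-614518, localhost, minikube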
	I1216 07:03:49.852395 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:49.852668 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:49.852778 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:49.870814 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:49.871144 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34310 <nil> <nil>}
	I1216 07:03:49.871168 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:50.263536 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:50.263563 1687487 machine.go:97] duration metric: took 4.678406656s to provisionDockerMachine
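	(The drop-in written above only matters if the crio unit in the kicbase image references /etc/sysconfig/crio.minikube; a minimal check from the host, assuming the profile name from this log and whatever minikube binary the workspace uses:)

	    minikube -p ha-614518 ssh -- cat /etc/sysconfig/crio.minikube
	    # whether the unit sources the file is an assumption about the kicbase image:
	    minikube -p ha-614518 ssh -- 'systemctl cat crio | grep -i -A2 EnvironmentFile'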
	I1216 07:03:50.263587 1687487 start.go:293] postStartSetup for "ha-614518" (driver="docker")
	I1216 07:03:50.263599 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:50.263688 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:50.263741 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.288161 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.388424 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:50.391627 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:50.391661 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:50.391673 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:50.391729 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:50.391823 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:50.391835 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:50.391942 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:50.399136 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:50.417106 1687487 start.go:296] duration metric: took 153.503323ms for postStartSetup
	I1216 07:03:50.417188 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:50.417231 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.433965 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.525944 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:03:50.531286 1687487 fix.go:56] duration metric: took 5.314914646s for fixHost
	I1216 07:03:50.531388 1687487 start.go:83] releasing machines lock for "ha-614518", held for 5.315142989s
	I1216 07:03:50.531501 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:03:50.548584 1687487 ssh_runner.go:195] Run: cat /version.json
	I1216 07:03:50.548651 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.548722 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:03:50.548786 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:03:50.573896 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.582211 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:03:50.773920 1687487 ssh_runner.go:195] Run: systemctl --version
	I1216 07:03:50.780399 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:03:50.815666 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:03:50.820120 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:03:50.820193 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:03:50.828039 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:03:50.828121 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:03:50.828169 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:03:50.828249 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:03:50.844121 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:03:50.857243 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:03:50.857381 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:03:50.873095 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:03:50.886187 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:03:51.006275 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:03:51.140914 1687487 docker.go:234] disabling docker service ...
	I1216 07:03:51.140991 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:03:51.157238 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:03:51.171898 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:03:51.287675 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:03:51.421310 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
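	(After the stop/mask sequence above, docker and cri-dockerd should be inert and CRI-O the only runtime on the node; a rough spot-check, same caveat about the minikube binary path:)

	    minikube -p ha-614518 ssh -- 'systemctl is-enabled docker.service cri-docker.service cri-docker.socket; systemctl is-active crio'
	    # masked units report "masked" with a non-zero exit status; crio should report "active"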
	I1216 07:03:51.434905 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:03:51.449226 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:03:51.449297 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.458120 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:03:51.458190 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.467336 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.476031 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.484943 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:03:51.493309 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.502592 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.511462 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:03:51.520904 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:03:51.528691 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:03:51.536073 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:51.644582 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:03:51.813587 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:03:51.813682 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:03:51.818257 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:03:51.818378 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:03:51.822136 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:03:51.848811 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
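	(The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup, and open low ports via default_sysctls in /etc/crio/crio.conf.d/02-crio.conf; after the crio restart they can be read back roughly like this:)

	    minikube -p ha-614518 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    minikube -p ha-614518 ssh -- sudo crictl version   # should report RuntimeName cri-o, RuntimeVersion 1.34.3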
	I1216 07:03:51.848971 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:03:51.877270 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:03:51.911920 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:03:51.914805 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:03:51.931261 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:03:51.935082 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:03:51.945205 1687487 kubeadm.go:884] updating cluster {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:03:51.945357 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:51.945422 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:51.979077 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:51.979106 1687487 crio.go:433] Images already preloaded, skipping extraction
	I1216 07:03:51.979163 1687487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:03:52.008543 1687487 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:03:52.008569 1687487 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:03:52.008578 1687487 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 07:03:52.008687 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
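	(A sketch for inspecting the kubelet unit the snippet above is rendered into; per the scp further down, the drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:)

	    minikube -p ha-614518 ssh -- systemctl cat kubelet
	    # the effective ExecStart should carry --node-ip=192.168.49.2 and --hostname-override=ha-614518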
	I1216 07:03:52.008783 1687487 ssh_runner.go:195] Run: crio config
	I1216 07:03:52.064647 1687487 cni.go:84] Creating CNI manager for ""
	I1216 07:03:52.064671 1687487 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 07:03:52.064694 1687487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:03:52.064717 1687487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-614518 NodeName:ha-614518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:03:52.064852 1687487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-614518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 07:03:52.064876 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:03:52.064936 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:03:52.077257 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
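	(kube-vip only enables IPVS-based control-plane load-balancing when the ip_vs modules are loadable; with the docker driver the node shares the CI host's kernel, so whether this fallback to plain ARP mode is avoidable depends on that host. A hedged check/attempt:)

	    minikube -p ha-614518 ssh -- 'lsmod | grep ip_vs || sudo modprobe ip_vs'
	    minikube -p ha-614518 ssh -- 'lsmod | grep ip_vs'   # still empty => the host kernel lacks the module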
	I1216 07:03:52.077367 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
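	(Once the cluster is back up, the static pod above should place the HA VIP 192.168.49.254 on eth0 of whichever control-plane node holds the plndr-cp-lock lease; a rough way to see both, using the context name set up by the kubeconfig repair later in this log:)

	    minikube -p ha-614518 ssh -- 'ip addr show eth0 | grep 192.168.49.254'   # present only on the current leader
	    kubectl --context ha-614518 -n kube-system get lease plndr-cp-lock       # HOLDER column names the leader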
	I1216 07:03:52.077440 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:03:52.085615 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:03:52.085717 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1216 07:03:52.093632 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1216 07:03:52.107221 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:03:52.120189 1687487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
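	(With the rendered config now copied to /var/tmp/minikube/kubeadm.yaml.new, it can be checked in place against the kubeadm binary found above; a sketch, assuming `kubeadm config validate` is available in this kubeadm release:)

	    minikube -p ha-614518 ssh -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new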
	I1216 07:03:52.132971 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:03:52.145766 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:03:52.149312 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:03:52.158923 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:03:52.283710 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:03:52.301582 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.2
	I1216 07:03:52.301603 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:03:52.301620 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.301773 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:03:52.301822 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:03:52.301833 1687487 certs.go:257] generating profile certs ...
	I1216 07:03:52.301907 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:03:52.301945 1687487 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1
	I1216 07:03:52.301963 1687487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1216 07:03:52.415504 1687487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 ...
	I1216 07:03:52.415537 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1: {Name:mk670a19d587f16baf0df889e9e917056f8f5261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415731 1687487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 ...
	I1216 07:03:52.415747 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1: {Name:mk54bea57dae6ed1500bec8bfd5028c4fbd13a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:52.415839 1687487 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt
	I1216 07:03:52.415977 1687487 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.d39b37a1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key
	I1216 07:03:52.416116 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:03:52.416135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:03:52.416152 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:03:52.416168 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:03:52.416186 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:03:52.416197 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:03:52.416215 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:03:52.416235 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:03:52.416253 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:03:52.416304 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:03:52.416340 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:03:52.416355 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:03:52.416384 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:03:52.416413 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:03:52.416440 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:03:52.416515 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:52.416550 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.416569 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.416583 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.417145 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:03:52.438246 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:03:52.458550 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:03:52.483806 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:03:52.504536 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:03:52.531165 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:03:52.551893 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:03:52.571589 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:03:52.590649 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:03:52.610138 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:03:52.630965 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:03:52.650790 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:03:52.664186 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:03:52.671337 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.678844 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:03:52.686401 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690368 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.690436 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:03:52.731470 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:03:52.738706 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.745967 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:03:52.753284 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757015 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.757119 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:03:52.798254 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:03:52.805456 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.812464 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:03:52.820202 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823851 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.823958 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:03:52.864891 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
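	(The ln -fs / `openssl x509 -hash` pairs above are a manual version of CA store rehashing: the link names checked afterwards (b5213941.0, 51391683.0, 3ec20f2e.0) are the subject-name hashes of the respective PEMs plus a .0 suffix. Roughly the same result in one step on the node:)

	    minikube -p ha-614518 ssh -- 'openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem'   # prints the hash, e.g. b5213941
	    minikube -p ha-614518 ssh -- sudo openssl rehash /etc/ssl/certs   # OpenSSL 1.1+; rebuilds all <hash>.N links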
	I1216 07:03:52.872666 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:03:52.876565 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:03:52.917593 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:03:52.962371 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:03:53.011634 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:03:53.070012 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:03:53.127584 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
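	(The -checkend 86400 calls above ask whether each control-plane certificate is still valid 24 hours from now; openssl exits 0 when the certificate will not have expired within that window, which is what lets the restart skip regenerating them. The idiom, spelled out:)

	    # run on the node (e.g. via minikube ssh)
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "valid for at least another 24h" \
	      || echo "expires (or is already expired) within 24h"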
	I1216 07:03:53.215856 1687487 kubeadm.go:401] StartCluster: {Name:ha-614518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:03:53.216035 1687487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:03:53.216134 1687487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:03:53.263680 1687487 cri.go:89] found id: "11e4b44d62d5436a07f6d8edd733f4092c09af04d3fa6130a9ee2d504c2d7b92"
	I1216 07:03:53.263744 1687487 cri.go:89] found id: "69514719ce90eebffbe68b0ace74e14259ceea7c07980c6918b6af6e8b91ba10"
	I1216 07:03:53.263764 1687487 cri.go:89] found id: "b6e4d702970e634028ab9da9ca8e258d02bb0aa908a74a428d72bd35cdec320d"
	I1216 07:03:53.263787 1687487 cri.go:89] found id: "c0e9d15ebb1cd884461c491d76b9c135253b28403f1a18a97c1bdb68443fe858"
	I1216 07:03:53.263822 1687487 cri.go:89] found id: "db591d0d437f81b8c65552b6efbd2ca8fb29bb1e0989d62b2cce8be69b46105c"
	I1216 07:03:53.263846 1687487 cri.go:89] found id: ""
	I1216 07:03:53.263924 1687487 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 07:03:53.279629 1687487 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:03:53Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:03:53.279752 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:03:53.291564 1687487 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 07:03:53.291626 1687487 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 07:03:53.291717 1687487 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 07:03:53.306008 1687487 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:03:53.306492 1687487 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-614518" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.306648 1687487 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "ha-614518" cluster setting kubeconfig missing "ha-614518" context setting]
	I1216 07:03:53.306941 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.307502 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:03:53.308322 1687487 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 07:03:53.308427 1687487 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 07:03:53.308488 1687487 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 07:03:53.308515 1687487 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 07:03:53.308406 1687487 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 07:03:53.308623 1687487 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 07:03:53.308936 1687487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 07:03:53.317737 1687487 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 07:03:53.317797 1687487 kubeadm.go:602] duration metric: took 26.14434ms to restartPrimaryControlPlane
	I1216 07:03:53.317823 1687487 kubeadm.go:403] duration metric: took 101.97493ms to StartCluster
	I1216 07:03:53.317854 1687487 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:03:53.317948 1687487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:03:53.318556 1687487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
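	(After the repair above, the kubeconfig in the Jenkins workspace should again contain an ha-614518 cluster and context; a quick check using the path from the log:)

	    export KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	    kubectl config get-clusters
	    kubectl config get-contexts ha-614518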
	I1216 07:03:53.318810 1687487 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:03:53.318859 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:03:53.318894 1687487 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 07:03:53.319377 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.323257 1687487 out.go:179] * Enabled addons: 
	I1216 07:03:53.326246 1687487 addons.go:530] duration metric: took 7.35197ms for enable addons: enabled=[]
	I1216 07:03:53.326324 1687487 start.go:247] waiting for cluster config update ...
	I1216 07:03:53.326358 1687487 start.go:256] writing updated cluster config ...
	I1216 07:03:53.329613 1687487 out.go:203] 
	I1216 07:03:53.332888 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:53.333052 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.336576 1687487 out.go:179] * Starting "ha-614518-m02" control-plane node in "ha-614518" cluster
	I1216 07:03:53.339553 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:03:53.342482 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:03:53.345454 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:03:53.345546 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:03:53.345514 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:03:53.345877 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:03:53.345913 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:03:53.346063 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.363377 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:03:53.363397 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:03:53.363414 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:03:53.363438 1687487 start.go:360] acquireMachinesLock for ha-614518-m02: {Name:mka615bda267fcf7df6d6dfdc68cac769a75315d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:03:53.363497 1687487 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-614518-m02"
	I1216 07:03:53.363523 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:03:53.363534 1687487 fix.go:54] fixHost starting: m02
	I1216 07:03:53.363791 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.383383 1687487 fix.go:112] recreateIfNeeded on ha-614518-m02: state=Stopped err=<nil>
	W1216 07:03:53.383415 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:03:53.386537 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m02" ...
	I1216 07:03:53.386636 1687487 cli_runner.go:164] Run: docker start ha-614518-m02
	I1216 07:03:53.794943 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:53.822138 1687487 kic.go:430] container "ha-614518-m02" state is running.
	I1216 07:03:53.822535 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:53.851090 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:03:53.851356 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:03:53.851426 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:53.878317 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:53.878677 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:53.878696 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:03:53.879342 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:03:57.124004 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.124068 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m02"
	I1216 07:03:57.124164 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.173735 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.174061 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.174078 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m02 && echo "ha-614518-m02" | sudo tee /etc/hostname
	I1216 07:03:57.438628 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m02
	
	I1216 07:03:57.438749 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.472722 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:57.473050 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:57.473073 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:03:57.677870 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
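	(The hostname and /etc/hosts fix-up above runs over SSH against the secondary control plane; with a multi-node profile it can be spot-checked per node, e.g. with the -n/--node flag selecting m02:)

	    minikube -p ha-614518 ssh -n m02 -- 'hostname; grep ha-614518-m02 /etc/hosts'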
	I1216 07:03:57.677921 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:03:57.677946 1687487 ubuntu.go:190] setting up certificates
	I1216 07:03:57.677958 1687487 provision.go:84] configureAuth start
	I1216 07:03:57.678055 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:03:57.722106 1687487 provision.go:143] copyHostCerts
	I1216 07:03:57.722151 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722185 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:03:57.722198 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:03:57.722276 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:03:57.722357 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722379 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:03:57.722388 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:03:57.722421 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:03:57.722465 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722489 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:03:57.722498 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:03:57.722529 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:03:57.722633 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m02 san=[127.0.0.1 192.168.49.3 ha-614518-m02 localhost minikube]
	I1216 07:03:57.844425 1687487 provision.go:177] copyRemoteCerts
	I1216 07:03:57.844504 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:03:57.844548 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:57.862917 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:57.972376 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:03:57.972445 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:03:58.017243 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:03:58.017311 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:03:58.059767 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:03:58.059828 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:03:58.113177 1687487 provision.go:87] duration metric: took 435.20178ms to configureAuth
	I1216 07:03:58.113246 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:03:58.113513 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:58.113663 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:58.142721 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:03:58.143019 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34315 <nil> <nil>}
	I1216 07:03:58.143032 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:03:59.702077 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:03:59.702157 1687487 machine.go:97] duration metric: took 5.850782021s to provisionDockerMachine
	I1216 07:03:59.702183 1687487 start.go:293] postStartSetup for "ha-614518-m02" (driver="docker")
	I1216 07:03:59.702253 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:03:59.702337 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:03:59.702409 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.738247 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:03:59.855085 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:03:59.858756 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:03:59.858785 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:03:59.858797 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:03:59.858854 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:03:59.858930 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:03:59.858937 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:03:59.859038 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:03:59.868409 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:03:59.890719 1687487 start.go:296] duration metric: took 188.504339ms for postStartSetup
	I1216 07:03:59.890855 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:03:59.890922 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:03:59.909691 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.010830 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:04:00.053896 1687487 fix.go:56] duration metric: took 6.690353109s for fixHost
	I1216 07:04:00.053984 1687487 start.go:83] releasing machines lock for "ha-614518-m02", held for 6.690472315s
	I1216 07:04:00.054132 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m02
	I1216 07:04:00.100321 1687487 out.go:179] * Found network options:
	I1216 07:04:00.105391 1687487 out.go:179]   - NO_PROXY=192.168.49.2
	W1216 07:04:00.108450 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:04:00.108636 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	I1216 07:04:00.108742 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:04:00.108814 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.109177 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:04:00.115341 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m02
	I1216 07:04:00.165700 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.232046 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34315 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m02/id_rsa Username:docker}
	I1216 07:04:00.645936 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:04:00.658871 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:04:00.658994 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:04:00.687970 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:04:00.688053 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:04:00.688101 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:04:00.688186 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:04:00.715577 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:04:00.751617 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:04:00.751681 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:04:00.778303 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:04:00.802164 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:04:01.047882 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:04:01.301807 1687487 docker.go:234] disabling docker service ...
	I1216 07:04:01.301880 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:04:01.322236 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:04:01.348117 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:04:01.593311 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:04:01.834030 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:04:01.858526 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:04:01.886506 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:04:01.886622 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.922317 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:04:01.922463 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.953232 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.971302 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:01.993804 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:04:02.013934 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.031424 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.046246 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:04:02.066027 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:04:02.080394 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:04:02.095283 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:04:02.419550 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:05:32.857802 1687487 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.438149921s)
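	Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, conmon_cgroup = "pod" and the unprivileged-port sysctl, before the (here unusually slow, ~1m30s) crio restart picks them up. A sketch of the resulting drop-in written in one shot from Go; the section layout assumes the stock CRI-O config file and the writer itself is illustrative:

	package runtime

	import "os"

	// crioDropIn reflects the state the individual sed edits in the log converge on.
	const crioDropIn = `[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	`

	// WriteCRIOConf writes the drop-in; crio must be restarted afterwards for it to apply.
	func WriteCRIOConf() error {
		return os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644)
	}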
	I1216 07:05:32.857827 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:05:32.857897 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:05:32.861796 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:05:32.861879 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:05:32.865559 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:05:32.893251 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:05:32.893334 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.921229 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:05:32.960111 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:05:32.963074 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:05:32.965965 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:05:32.983713 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:05:32.988187 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:32.998448 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:05:32.998787 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:32.999107 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:05:33.020295 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:05:33.020623 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.3
	I1216 07:05:33.020635 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:05:33.020650 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:05:33.020784 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:05:33.020838 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:05:33.020847 1687487 certs.go:257] generating profile certs ...
	I1216 07:05:33.020922 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key
	I1216 07:05:33.020982 1687487 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key.10d34f0f
	I1216 07:05:33.021018 1687487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key
	I1216 07:05:33.021037 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:05:33.021050 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:05:33.021075 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:05:33.021088 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:05:33.021102 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 07:05:33.021114 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 07:05:33.021125 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 07:05:33.021135 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 07:05:33.021191 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:05:33.021222 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:05:33.021230 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:05:33.021255 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:05:33.021279 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:05:33.021303 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:05:33.021363 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:05:33.021393 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.021405 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.021415 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.021480 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:05:33.040303 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34310 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:05:33.132825 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1216 07:05:33.136811 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1216 07:05:33.145267 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1216 07:05:33.148926 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1216 07:05:33.157749 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1216 07:05:33.161324 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1216 07:05:33.170007 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1216 07:05:33.174232 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1216 07:05:33.182495 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1216 07:05:33.186607 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1216 07:05:33.194939 1687487 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1216 07:05:33.198815 1687487 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1216 07:05:33.207734 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:05:33.226981 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:05:33.246475 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:05:33.265061 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:05:33.284210 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 07:05:33.306195 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:05:33.324956 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:05:33.343476 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:05:33.361548 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:05:33.380428 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:05:33.398886 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:05:33.416891 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1216 07:05:33.430017 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1216 07:05:33.442986 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1216 07:05:33.456178 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1216 07:05:33.469704 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1216 07:05:33.484299 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1216 07:05:33.499729 1687487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1216 07:05:33.516041 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:05:33.524362 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.532162 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:05:33.540324 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544918 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.544995 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:05:33.585992 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:05:33.593625 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.601101 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:05:33.608445 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613481 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.613546 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:05:33.656579 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:05:33.664104 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.671624 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:05:33.679463 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683654 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.683720 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:05:33.725052 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:05:33.733624 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:05:33.737572 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:05:33.781425 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:05:33.824276 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:05:33.865794 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:05:33.909050 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:05:33.951953 1687487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 07:05:33.993867 1687487 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1216 07:05:33.993976 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:05:33.994007 1687487 kube-vip.go:115] generating kube-vip config ...
	I1216 07:05:33.994059 1687487 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 07:05:34.009409 1687487 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:05:34.009486 1687487 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
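	The kube-vip manifest above is rendered with the control-plane VIP 192.168.49.254 and the leader-election timings filled in, then dropped into /etc/kubernetes/manifests as a static pod (the 1358-byte kube-vip.yaml scp a few lines down). A minimal sketch of templating such a manifest with text/template; the struct, field names and the trimmed-down YAML are illustrative, not minikube's kube-vip.go:

	package kubevip

	import (
		"os"
		"text/template"
	)

	// Params are the per-cluster values; defaults mirror the manifest above.
	type Params struct {
		VIP   string // e.g. "192.168.49.254"
		Port  string // e.g. "8443"
		Image string // e.g. "ghcr.io/kube-vip/kube-vip:v1.0.2"
	}

	var manifestTmpl = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  hostNetwork: true
	  containers:
	  - name: kube-vip
	    image: {{.Image}}
	    args: ["manager"]
	    env:
	    - {name: port, value: "{{.Port}}"}
	    - {name: address, value: "{{.VIP}}"}
	    - {name: cp_enable, value: "true"}
	    volumeMounts:
	    - {mountPath: /etc/kubernetes/admin.conf, name: kubeconfig}
	  volumes:
	  - {hostPath: {path: /etc/kubernetes/admin.conf}, name: kubeconfig}
	`))

	// WriteStaticPod renders the manifest into the directory kubelet scans for static pods.
	func WriteStaticPod(p Params, dir string) error {
		f, err := os.Create(dir + "/kube-vip.yaml")
		if err != nil {
			return err
		}
		defer f.Close()
		return manifestTmpl.Execute(f, p)
	}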
	I1216 07:05:34.009582 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:05:34.018576 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:05:34.018674 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1216 07:05:34.027410 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:05:34.042363 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:05:34.056182 1687487 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 07:05:34.074014 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:05:34.077990 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:05:34.088295 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.232095 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.247231 1687487 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:05:34.247603 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:05:34.253170 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:05:34.255848 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:05:34.381731 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:05:34.396551 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:05:34.396622 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:05:34.397115 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m02" to be "Ready" ...
	I1216 07:05:37.040586 1687487 node_ready.go:49] node "ha-614518-m02" is "Ready"
	I1216 07:05:37.040621 1687487 node_ready.go:38] duration metric: took 2.643481502s for node "ha-614518-m02" to be "Ready" ...
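	The readiness wait above goes through the client-go config dumped at kapi.go:59 and simply polls the node object until its Ready condition is True. A minimal sketch of that pattern with client-go, assuming an already-built clientset; the helper name and polling interval are illustrative, not minikube's node_ready.go:

	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls until the named node reports Ready=True or the timeout expires.
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	}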
	I1216 07:05:37.040635 1687487 api_server.go:52] waiting for apiserver process to appear ...
	I1216 07:05:37.040695 1687487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:05:37.061374 1687487 api_server.go:72] duration metric: took 2.814094s to wait for apiserver process to appear ...
	I1216 07:05:37.061401 1687487 api_server.go:88] waiting for apiserver healthz status ...
	I1216 07:05:37.061420 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.074087 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.074124 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
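	From here the log settles into a poll loop: roughly every 500ms it hits https://192.168.49.2:8443/healthz and keeps retrying while the rbac/bootstrap-roles post-start hook is still failing, i.e. while the endpoint returns 500 instead of a plain 200 "ok". A minimal sketch of that polling pattern using net/http; skipping TLS verification is only to keep the sketch short, the real check authenticates against the cluster CA:

	package health

	import (
		"context"
		"crypto/tls"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// WaitHealthz polls the apiserver /healthz endpoint until it answers 200 "ok"
	// or the context is cancelled.
	func WaitHealthz(ctx context.Context, url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		tick := time.NewTicker(500 * time.Millisecond) // mirrors the ~500ms cadence in the log
		defer tick.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
					return nil // every check, including rbac/bootstrap-roles, has passed
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}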
	I1216 07:05:37.561699 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:37.575722 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:37.575749 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.062105 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.073942 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.073979 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:38.561534 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:38.571539 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:38.571575 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.062243 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.070626 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.070656 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:39.562250 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:39.570668 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:39.570709 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.062490 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.071222 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:40.071258 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:40.561835 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:40.570234 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:40.570267 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:41.062517 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.070865 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:41.070907 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:41.562123 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:41.570314 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:41.570354 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:42.061560 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.070019 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:42.070066 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:42.561525 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:42.575709 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:42.575741 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:43.062386 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.072157 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:43.072235 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:43.561622 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:43.569766 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:43.569792 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:44.062378 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.073021 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:44.073060 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:44.562264 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:44.570578 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:44.570610 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:45.063004 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.074685 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:45.074724 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:45.562091 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:45.570321 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:45.570358 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:46.062073 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.070931 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:46.070966 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:46.561565 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:46.569995 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:46.570026 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:47.061616 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.072095 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:47.072131 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:47.561577 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:47.570812 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:47.570839 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	I1216 07:05:48.062047 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.070373 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	healthz check failed
	W1216 07:05:48.070403 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:48.562094 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:48.570453 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:48.570491 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.062122 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.070449 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.070490 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:49.561963 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:49.570228 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:49.570254 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.061859 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.070692 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.070727 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:50.562001 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:50.570230 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:50.570256 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.061757 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.070029 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.070062 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:51.561541 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:51.570443 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:51.570470 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.061863 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.070098 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.070127 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:52.561554 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:52.571992 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:52.572023 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.061596 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.069723 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.069756 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:53.562103 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:53.570175 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:53.570210 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.061674 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.069916 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.069946 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:54.561543 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:54.569758 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:54.569785 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:55.062452 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:55.071750 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:55.071778 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:05:55.562411 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:55.572141 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:05:55.572172 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:56.061606 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:56.070095 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:56.070177 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:56.561548 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:56.569665 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:56.569692 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:57.061801 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:57.069953 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:57.069981 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:57.561491 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:57.569864 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:57.569901 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:58.062468 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:58.070718 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:58.070747 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:58.562420 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:58.584824 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:58.584854 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:59.062385 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:59.070501 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:59.070541 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:05:59.561854 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:05:59.569961 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:05:59.569992 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:00.061869 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:00.114940 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:06:00.115034 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:00.561553 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:00.570378 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:06:00.570407 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:01.062023 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:01.070600 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:06:01.070633 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:01.562296 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:01.570659 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:06:01.570688 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:02.062180 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:02.070681 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:06:02.070728 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:02.562216 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:02.570655 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:06:02.570684 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:03.062338 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.071577 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	W1216 07:06:03.071605 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz response body identical to the output above (only poststarthook/rbac/bootstrap-roles failing); omitted]
	I1216 07:06:03.562262 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:03.570378 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:03.570415 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.061866 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.070630 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.070665 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:04.562372 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:04.573063 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:06:04.573103 1687487 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:06:05.061594 1687487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 07:06:05.070425 1687487 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 07:06:05.071905 1687487 api_server.go:141] control plane version: v1.34.2
	I1216 07:06:05.071945 1687487 api_server.go:131] duration metric: took 28.010531893s to wait for apiserver health ...
	I1216 07:06:05.071959 1687487 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 07:06:05.081048 1687487 system_pods.go:59] 26 kube-system pods found
	I1216 07:06:05.081158 1687487 system_pods.go:61] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081176 1687487 system_pods.go:61] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.081183 1687487 system_pods.go:61] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.081188 1687487 system_pods.go:61] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.081192 1687487 system_pods.go:61] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.081197 1687487 system_pods.go:61] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.081201 1687487 system_pods.go:61] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.081204 1687487 system_pods.go:61] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.081208 1687487 system_pods.go:61] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.081223 1687487 system_pods.go:61] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.081228 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.081233 1687487 system_pods.go:61] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.081244 1687487 system_pods.go:61] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.081249 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.081262 1687487 system_pods.go:61] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.081266 1687487 system_pods.go:61] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.081270 1687487 system_pods.go:61] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.081276 1687487 system_pods.go:61] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.081291 1687487 system_pods.go:61] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.081296 1687487 system_pods.go:61] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.081301 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.081305 1687487 system_pods.go:61] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.081309 1687487 system_pods.go:61] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.081313 1687487 system_pods.go:61] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.081317 1687487 system_pods.go:61] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.081323 1687487 system_pods.go:61] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.081329 1687487 system_pods.go:74] duration metric: took 9.364099ms to wait for pod list to return data ...
	I1216 07:06:05.081337 1687487 default_sa.go:34] waiting for default service account to be created ...
	I1216 07:06:05.084727 1687487 default_sa.go:45] found service account: "default"
	I1216 07:06:05.084759 1687487 default_sa.go:55] duration metric: took 3.415392ms for default service account to be created ...
	I1216 07:06:05.084770 1687487 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 07:06:05.092252 1687487 system_pods.go:86] 26 kube-system pods found
	I1216 07:06:05.092293 1687487 system_pods.go:89] "coredns-66bc5c9577-j2dlk" [7cdee874-13b2-4689-accf-e066854554a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092305 1687487 system_pods.go:89] "coredns-66bc5c9577-wnl5v" [9256d5c3-7034-467c-8cd0-d6f4987701c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:06:05.092311 1687487 system_pods.go:89] "etcd-ha-614518" [dec5e097-b96b-40dd-a2f9-a9182668648e] Running
	I1216 07:06:05.092318 1687487 system_pods.go:89] "etcd-ha-614518-m02" [5998a7f5-5092-4768-b87a-c510c308efda] Running
	I1216 07:06:05.092322 1687487 system_pods.go:89] "etcd-ha-614518-m03" [d0a65bae-d842-4e55-85d9-ae1d6429088c] Running
	I1216 07:06:05.092327 1687487 system_pods.go:89] "kindnet-4gbf2" [b5285121-5662-466c-929f-6fe0e623e252] Running
	I1216 07:06:05.092331 1687487 system_pods.go:89] "kindnet-kwm49" [3a07c975-5ae6-434e-a9da-c68833c8a6dc] Running
	I1216 07:06:05.092336 1687487 system_pods.go:89] "kindnet-qpdxp" [44975bb5-380a-4313-99bd-df7510492688] Running
	I1216 07:06:05.092346 1687487 system_pods.go:89] "kindnet-t2849" [14c37491-38c8-4d32-89e2-d5065c21a976] Running
	I1216 07:06:05.092353 1687487 system_pods.go:89] "kube-apiserver-ha-614518" [51b10c5f-bf67-430b-85d7-ba31c2602e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:06:05.092360 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m02" [b25aee21-ddf1-4fc7-87e2-92a70d851d7a] Running
	I1216 07:06:05.092365 1687487 system_pods.go:89] "kube-apiserver-ha-614518-m03" [79a42481-9723-4f77-aec4-5d5727a98c63] Running
	I1216 07:06:05.092376 1687487 system_pods.go:89] "kube-controller-manager-ha-614518" [42894aa1-df0a-43d9-9a93-5b6141db631c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:06:05.092381 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m02" [984e14b1-d933-4792-b225-65a0fce5c8ac] Running
	I1216 07:06:05.092388 1687487 system_pods.go:89] "kube-controller-manager-ha-614518-m03" [b455cb3c-7c98-4ec2-9ce0-36e5c2f3b8cf] Running
	I1216 07:06:05.092392 1687487 system_pods.go:89] "kube-proxy-4kdt5" [45eb7aa5-bb99-4da3-883f-cdd380715c71] Running
	I1216 07:06:05.092399 1687487 system_pods.go:89] "kube-proxy-bmxpt" [573f4950-4197-4e95-90e8-93a2ec8bd016] Running
	I1216 07:06:05.092411 1687487 system_pods.go:89] "kube-proxy-fhwcs" [f6d4a561-d45e-4149-b00a-9fc8ef22017f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 07:06:05.092416 1687487 system_pods.go:89] "kube-proxy-qqr57" [bfce576a-7733-4a72-acf8-33d64dd3287a] Running
	I1216 07:06:05.092421 1687487 system_pods.go:89] "kube-scheduler-ha-614518" [ce73c116-9a87-4180-add6-fb07eb04c9a0] Running
	I1216 07:06:05.092426 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m02" [249b5f83-63be-4691-87b1-5e25e13865ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:06:05.092433 1687487 system_pods.go:89] "kube-scheduler-ha-614518-m03" [db57c26e-9813-4b2b-b70b-0a07ed119aaa] Running
	I1216 07:06:05.092438 1687487 system_pods.go:89] "kube-vip-ha-614518" [e7bcfc9a-42b0-4066-9bb1-4abf917e98b9] Running
	I1216 07:06:05.092445 1687487 system_pods.go:89] "kube-vip-ha-614518-m02" [e662027d-d25a-4273-bdb7-9e21f666839e] Running
	I1216 07:06:05.092449 1687487 system_pods.go:89] "kube-vip-ha-614518-m03" [edab6af2-c513-479d-a2c8-c474380ca5d9] Running
	I1216 07:06:05.092455 1687487 system_pods.go:89] "storage-provisioner" [c8b9c00b-10bc-423c-b16e-3f3cdb12e907] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 07:06:05.092495 1687487 system_pods.go:126] duration metric: took 7.68911ms to wait for k8s-apps to be running ...
	I1216 07:06:05.092507 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:05.092570 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:05.107026 1687487 system_svc.go:56] duration metric: took 14.508711ms WaitForService to wait for kubelet
	I1216 07:06:05.107098 1687487 kubeadm.go:587] duration metric: took 30.859823393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:06:05.107133 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:05.110974 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111054 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111086 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111110 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111145 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:05.111170 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:05.111190 1687487 node_conditions.go:105] duration metric: took 4.037891ms to run NodePressure ...
	I1216 07:06:05.111216 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:05.111269 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:05.116668 1687487 out.go:203] 
	I1216 07:06:05.120812 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:05.120934 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.124552 1687487 out.go:179] * Starting "ha-614518-m04" worker node in "ha-614518" cluster
	I1216 07:06:05.128339 1687487 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:06:05.132036 1687487 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:06:05.135120 1687487 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:06:05.135153 1687487 cache.go:65] Caching tarball of preloaded images
	I1216 07:06:05.135238 1687487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:06:05.135318 1687487 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:06:05.135332 1687487 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:06:05.135455 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.157793 1687487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:06:05.157815 1687487 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:06:05.157833 1687487 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:06:05.157859 1687487 start.go:360] acquireMachinesLock for ha-614518-m04: {Name:mk43a7770b67c048f75b229b4d32a0d7d442337b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:06:05.157933 1687487 start.go:364] duration metric: took 53.449µs to acquireMachinesLock for "ha-614518-m04"
	I1216 07:06:05.157958 1687487 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:06:05.157970 1687487 fix.go:54] fixHost starting: m04
	I1216 07:06:05.158264 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.178507 1687487 fix.go:112] recreateIfNeeded on ha-614518-m04: state=Stopped err=<nil>
	W1216 07:06:05.178535 1687487 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:06:05.182229 1687487 out.go:252] * Restarting existing docker container for "ha-614518-m04" ...
	I1216 07:06:05.182326 1687487 cli_runner.go:164] Run: docker start ha-614518-m04
	I1216 07:06:05.490568 1687487 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:06:05.514214 1687487 kic.go:430] container "ha-614518-m04" state is running.
	I1216 07:06:05.514594 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:05.536033 1687487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/config.json ...
	I1216 07:06:05.536263 1687487 machine.go:94] provisionDockerMachine start ...
	I1216 07:06:05.536336 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:05.566891 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:05.567347 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:05.567367 1687487 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:06:05.568162 1687487 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:06:08.712253 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
	
	I1216 07:06:08.712286 1687487 ubuntu.go:182] provisioning hostname "ha-614518-m04"
	I1216 07:06:08.712350 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.732562 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.732911 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.732931 1687487 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-614518-m04 && echo "ha-614518-m04" | sudo tee /etc/hostname
	I1216 07:06:08.889442 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-614518-m04
	
	I1216 07:06:08.889531 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:08.909382 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:08.909721 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:08.909743 1687487 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-614518-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-614518-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-614518-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:06:09.077198 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:06:09.077226 1687487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:06:09.077243 1687487 ubuntu.go:190] setting up certificates
	I1216 07:06:09.077252 1687487 provision.go:84] configureAuth start
	I1216 07:06:09.077348 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:09.099011 1687487 provision.go:143] copyHostCerts
	I1216 07:06:09.099061 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099099 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:06:09.099113 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:06:09.099193 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:06:09.099292 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099317 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:06:09.099324 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:06:09.099359 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:06:09.099417 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099439 1687487 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:06:09.099448 1687487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:06:09.099477 1687487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:06:09.099540 1687487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.ha-614518-m04 san=[127.0.0.1 192.168.49.5 ha-614518-m04 localhost minikube]
	I1216 07:06:09.342772 1687487 provision.go:177] copyRemoteCerts
	I1216 07:06:09.342883 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:06:09.342952 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.362064 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:09.461352 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 07:06:09.461413 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:06:09.488306 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 07:06:09.488377 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 07:06:09.511681 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 07:06:09.511745 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:06:09.532372 1687487 provision.go:87] duration metric: took 455.10562ms to configureAuth
	I1216 07:06:09.532402 1687487 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:06:09.532749 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:09.532862 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.550583 1687487 main.go:143] libmachine: Using SSH client type: native
	I1216 07:06:09.550921 1687487 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34320 <nil> <nil>}
	I1216 07:06:09.550942 1687487 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:06:09.906062 1687487 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:06:09.906129 1687487 machine.go:97] duration metric: took 4.369846916s to provisionDockerMachine
	I1216 07:06:09.906156 1687487 start.go:293] postStartSetup for "ha-614518-m04" (driver="docker")
	I1216 07:06:09.906186 1687487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:06:09.906302 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:06:09.906394 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:09.928571 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.043685 1687487 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:06:10.067794 1687487 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:06:10.067836 1687487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:06:10.067850 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:06:10.067926 1687487 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:06:10.068023 1687487 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:06:10.068034 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /etc/ssl/certs/15992552.pem
	I1216 07:06:10.068175 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:06:10.080979 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:10.111023 1687487 start.go:296] duration metric: took 204.832511ms for postStartSetup
	I1216 07:06:10.111182 1687487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:06:10.111258 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.133434 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.243926 1687487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:06:10.252839 1687487 fix.go:56] duration metric: took 5.094861586s for fixHost
	I1216 07:06:10.252868 1687487 start.go:83] releasing machines lock for "ha-614518-m04", held for 5.094922297s
	I1216 07:06:10.252940 1687487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:06:10.273934 1687487 out.go:179] * Found network options:
	I1216 07:06:10.276892 1687487 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1216 07:06:10.279702 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279739 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279765 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	W1216 07:06:10.279776 1687487 proxy.go:120] fail to check proxy env: Error ip not in block
	I1216 07:06:10.279853 1687487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:06:10.279897 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.280186 1687487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:06:10.280250 1687487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:06:10.304141 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.316532 1687487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34320 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:06:10.464790 1687487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:06:10.529284 1687487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:06:10.529353 1687487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:06:10.550769 1687487 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:06:10.550846 1687487 start.go:496] detecting cgroup driver to use...
	I1216 07:06:10.550924 1687487 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:06:10.551036 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:06:10.576598 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:06:10.598097 1687487 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:06:10.598259 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:06:10.618172 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:06:10.634284 1687487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:06:10.768085 1687487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:06:10.900504 1687487 docker.go:234] disabling docker service ...
	I1216 07:06:10.900581 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:06:10.927152 1687487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:06:10.942383 1687487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:06:11.076847 1687487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:06:11.223349 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:06:11.239694 1687487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:06:11.255054 1687487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:06:11.255145 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.266034 1687487 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:06:11.266152 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.276524 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.286271 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.297358 1687487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:06:11.307624 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.322735 1687487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.331594 1687487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:06:11.341363 1687487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:06:11.355843 1687487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:06:11.364696 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:11.491229 1687487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:06:11.671501 1687487 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:06:11.671633 1687487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:06:11.675428 1687487 start.go:564] Will wait 60s for crictl version
	I1216 07:06:11.675526 1687487 ssh_runner.go:195] Run: which crictl
	I1216 07:06:11.679282 1687487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:06:11.704854 1687487 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:06:11.704992 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.737456 1687487 ssh_runner.go:195] Run: crio --version
	I1216 07:06:11.775396 1687487 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:06:11.778421 1687487 out.go:179]   - env NO_PROXY=192.168.49.2
	I1216 07:06:11.781653 1687487 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1216 07:06:11.784682 1687487 cli_runner.go:164] Run: docker network inspect ha-614518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:06:11.801080 1687487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 07:06:11.805027 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:11.815307 1687487 mustload.go:66] Loading cluster: ha-614518
	I1216 07:06:11.815555 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:11.815814 1687487 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:06:11.835520 1687487 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:06:11.835825 1687487 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518 for IP: 192.168.49.5
	I1216 07:06:11.835840 1687487 certs.go:195] generating shared ca certs ...
	I1216 07:06:11.835857 1687487 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:06:11.835999 1687487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:06:11.836046 1687487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:06:11.836063 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 07:06:11.836076 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 07:06:11.836096 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 07:06:11.836113 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 07:06:11.836166 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:06:11.836212 1687487 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:06:11.836243 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:06:11.836281 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:06:11.836313 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:06:11.836348 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:06:11.836418 1687487 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:06:11.836451 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:11.836505 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem -> /usr/share/ca-certificates/1599255.pem
	I1216 07:06:11.836521 1687487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> /usr/share/ca-certificates/15992552.pem
	I1216 07:06:11.836544 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:06:11.859722 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:06:11.879459 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:06:11.899359 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:06:11.925816 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:06:11.944678 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:06:11.966397 1687487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:06:11.991349 1687487 ssh_runner.go:195] Run: openssl version
	I1216 07:06:11.998038 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.010525 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:06:12.021207 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026113 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.026229 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:06:12.070208 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:06:12.077832 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.085281 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:06:12.093355 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097389 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.097457 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:06:12.138619 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:06:12.146494 1687487 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.153809 1687487 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:06:12.162460 1687487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166549 1687487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.166660 1687487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:06:12.214872 1687487 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:06:12.223038 1687487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:06:12.226786 1687487 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 07:06:12.226832 1687487 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1216 07:06:12.226911 1687487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-614518-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-614518 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:06:12.227009 1687487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:06:12.235141 1687487 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:06:12.235238 1687487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1216 07:06:12.243052 1687487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 07:06:12.258163 1687487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:06:12.272841 1687487 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 07:06:12.276276 1687487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:06:12.286557 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.414923 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.430788 1687487 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1216 07:06:12.431230 1687487 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:06:12.434498 1687487 out.go:179] * Verifying Kubernetes components...
	I1216 07:06:12.437537 1687487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:06:12.560193 1687487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:06:12.575224 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 07:06:12.575297 1687487 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 07:06:12.575574 1687487 node_ready.go:35] waiting up to 6m0s for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580068 1687487 node_ready.go:49] node "ha-614518-m04" is "Ready"
	I1216 07:06:12.580146 1687487 node_ready.go:38] duration metric: took 4.550298ms for node "ha-614518-m04" to be "Ready" ...
	I1216 07:06:12.580174 1687487 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:06:12.580258 1687487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:06:12.596724 1687487 system_svc.go:56] duration metric: took 16.541875ms WaitForService to wait for kubelet
	I1216 07:06:12.596751 1687487 kubeadm.go:587] duration metric: took 165.918494ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:06:12.596771 1687487 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:06:12.600376 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600404 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600416 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600421 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600449 1687487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:06:12.600453 1687487 node_conditions.go:123] node cpu capacity is 2
	I1216 07:06:12.600511 1687487 node_conditions.go:105] duration metric: took 3.699966ms to run NodePressure ...
	I1216 07:06:12.600548 1687487 start.go:242] waiting for startup goroutines ...
	I1216 07:06:12.600573 1687487 start.go:256] writing updated cluster config ...
	I1216 07:06:12.600919 1687487 ssh_runner.go:195] Run: rm -f paused
	I1216 07:06:12.604585 1687487 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:06:12.605147 1687487 kapi.go:59] client config for ha-614518: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/ha-614518/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:06:12.622024 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:06:14.630183 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:17.128396 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:19.129109 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:21.129471 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	W1216 07:06:23.629238 1687487 pod_ready.go:104] pod "coredns-66bc5c9577-j2dlk" is not "Ready", error: <nil>
	I1216 07:06:24.644123 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-j2dlk" is "Ready"
	I1216 07:06:24.644155 1687487 pod_ready.go:86] duration metric: took 12.022101955s for pod "coredns-66bc5c9577-j2dlk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:24.644167 1687487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.653985 1687487 pod_ready.go:94] pod "coredns-66bc5c9577-wnl5v" is "Ready"
	I1216 07:06:25.654011 1687487 pod_ready.go:86] duration metric: took 1.009837557s for pod "coredns-66bc5c9577-wnl5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.657436 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663112 1687487 pod_ready.go:94] pod "etcd-ha-614518" is "Ready"
	I1216 07:06:25.663199 1687487 pod_ready.go:86] duration metric: took 5.737586ms for pod "etcd-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.663224 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668572 1687487 pod_ready.go:94] pod "etcd-ha-614518-m02" is "Ready"
	I1216 07:06:25.668654 1687487 pod_ready.go:86] duration metric: took 5.405889ms for pod "etcd-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.668681 1687487 pod_ready.go:83] waiting for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.673835 1687487 pod_ready.go:99] pod "etcd-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "etcd-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:25.673908 1687487 pod_ready.go:86] duration metric: took 5.206207ms for pod "etcd-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:25.823380 1687487 request.go:683] "Waited before sending request" delay="149.293024ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1216 07:06:25.826990 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.023449 1687487 request.go:683] "Waited before sending request" delay="196.318606ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518"
	I1216 07:06:26.223386 1687487 request.go:683] "Waited before sending request" delay="196.351246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:26.226414 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518" is "Ready"
	I1216 07:06:26.226443 1687487 pod_ready.go:86] duration metric: took 399.426362ms for pod "kube-apiserver-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.226454 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.422838 1687487 request.go:683] "Waited before sending request" delay="196.262613ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m02"
	I1216 07:06:26.623137 1687487 request.go:683] "Waited before sending request" delay="197.08654ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:26.626398 1687487 pod_ready.go:94] pod "kube-apiserver-ha-614518-m02" is "Ready"
	I1216 07:06:26.626428 1687487 pod_ready.go:86] duration metric: took 399.966937ms for pod "kube-apiserver-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.626438 1687487 pod_ready.go:83] waiting for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:26.822787 1687487 request.go:683] "Waited before sending request" delay="196.265148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-614518-m03"
	I1216 07:06:27.023430 1687487 request.go:683] "Waited before sending request" delay="197.365ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m03"
	I1216 07:06:27.026875 1687487 pod_ready.go:99] pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-apiserver-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:27.026914 1687487 pod_ready.go:86] duration metric: took 400.4598ms for pod "kube-apiserver-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.223376 1687487 request.go:683] "Waited before sending request" delay="196.348931ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1216 07:06:27.227355 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:27.423607 1687487 request.go:683] "Waited before sending request" delay="196.15765ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:27.623198 1687487 request.go:683] "Waited before sending request" delay="196.252798ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:27.822756 1687487 request.go:683] "Waited before sending request" delay="94.181569ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-614518"
	I1216 07:06:28.023498 1687487 request.go:683] "Waited before sending request" delay="197.337742ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.423277 1687487 request.go:683] "Waited before sending request" delay="191.324919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:28.823130 1687487 request.go:683] "Waited before sending request" delay="90.229358ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	W1216 07:06:29.235219 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:31.235951 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:33.734756 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:35.735390 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:38.234527 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:40.734172 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	W1216 07:06:42.734590 1687487 pod_ready.go:104] pod "kube-controller-manager-ha-614518" is not "Ready", error: <nil>
	I1216 07:06:43.234658 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518" is "Ready"
	I1216 07:06:43.234687 1687487 pod_ready.go:86] duration metric: took 16.007305361s for pod "kube-controller-manager-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.234697 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246154 1687487 pod_ready.go:94] pod "kube-controller-manager-ha-614518-m02" is "Ready"
	I1216 07:06:43.246184 1687487 pod_ready.go:86] duration metric: took 11.479167ms for pod "kube-controller-manager-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.246194 1687487 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.251708 1687487 pod_ready.go:99] pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace is gone: node "ha-614518-m03" hosting pod "kube-controller-manager-ha-614518-m03" is not found/running (skipping!): nodes "ha-614518-m03" not found
	I1216 07:06:43.251789 1687487 pod_ready.go:86] duration metric: took 5.587232ms for pod "kube-controller-manager-ha-614518-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.255005 1687487 pod_ready.go:83] waiting for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260772 1687487 pod_ready.go:94] pod "kube-proxy-4kdt5" is "Ready"
	I1216 07:06:43.260800 1687487 pod_ready.go:86] duration metric: took 5.764523ms for pod "kube-proxy-4kdt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.260811 1687487 pod_ready.go:83] waiting for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.427957 1687487 request.go:683] "Waited before sending request" delay="164.183098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m04"
	I1216 07:06:43.431695 1687487 pod_ready.go:94] pod "kube-proxy-bmxpt" is "Ready"
	I1216 07:06:43.431727 1687487 pod_ready.go:86] duration metric: took 170.908436ms for pod "kube-proxy-bmxpt" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.431744 1687487 pod_ready.go:83] waiting for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.628038 1687487 request.go:683] "Waited before sending request" delay="196.208729ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhwcs"
	I1216 07:06:43.827976 1687487 request.go:683] "Waited before sending request" delay="196.30094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:43.837294 1687487 pod_ready.go:94] pod "kube-proxy-fhwcs" is "Ready"
	I1216 07:06:43.837327 1687487 pod_ready.go:86] duration metric: took 405.576793ms for pod "kube-proxy-fhwcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:43.837339 1687487 pod_ready.go:83] waiting for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.028582 1687487 request.go:683] "Waited before sending request" delay="191.164568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqr57"
	I1216 07:06:44.031704 1687487 pod_ready.go:99] pod "kube-proxy-qqr57" in "kube-system" namespace is gone: getting pod "kube-proxy-qqr57" in "kube-system" namespace (will retry): pods "kube-proxy-qqr57" not found
	I1216 07:06:44.031728 1687487 pod_ready.go:86] duration metric: took 194.382484ms for pod "kube-proxy-qqr57" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.228023 1687487 request.go:683] "Waited before sending request" delay="196.190299ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1216 07:06:44.234797 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.428282 1687487 request.go:683] "Waited before sending request" delay="193.336711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518"
	I1216 07:06:44.627997 1687487 request.go:683] "Waited before sending request" delay="196.267207ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518"
	I1216 07:06:44.631577 1687487 pod_ready.go:94] pod "kube-scheduler-ha-614518" is "Ready"
	I1216 07:06:44.631604 1687487 pod_ready.go:86] duration metric: took 396.729655ms for pod "kube-scheduler-ha-614518" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.631613 1687487 pod_ready.go:83] waiting for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:06:44.828815 1687487 request.go:683] "Waited before sending request" delay="197.130733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.028338 1687487 request.go:683] "Waited before sending request" delay="191.46624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.228724 1687487 request.go:683] "Waited before sending request" delay="96.318053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-614518-m02"
	I1216 07:06:45.428563 1687487 request.go:683] "Waited before sending request" delay="191.750075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:45.828353 1687487 request.go:683] "Waited before sending request" delay="192.34026ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	I1216 07:06:46.228325 1687487 request.go:683] "Waited before sending request" delay="93.248724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-614518-m02"
	W1216 07:06:46.637948 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:49.139119 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:51.638109 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:53.638454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:56.139011 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:06:58.638095 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:00.638769 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:03.139265 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:05.638593 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:07.638799 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:10.138642 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:12.638602 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:14.641618 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:17.139071 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:19.638792 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:22.138682 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:24.143581 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:26.637942 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:28.638514 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:30.639228 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:32.639571 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:35.139503 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:37.142108 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:39.637866 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:41.638931 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:44.139294 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:46.638205 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:48.638829 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:50.643744 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:53.139962 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:55.140229 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:07:57.638356 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:00.161064 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:02.638288 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:04.640454 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:07.138771 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:09.638023 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:11.638274 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:13.638989 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:16.137649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:18.138649 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:20.138856 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:22.638044 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:25.139148 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:27.638438 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:29.638561 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:31.638878 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:34.138583 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:36.638791 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:39.138672 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:41.143386 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:43.638185 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:45.640021 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:48.137933 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:50.638587 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:53.138384 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:55.138692 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:08:57.638524 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:00.191960 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:02.638290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:04.639287 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:07.139404 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:09.638715 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:12.137968 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:14.138290 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:16.138420 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:18.638585 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:20.639656 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:23.138623 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:25.638409 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:27.643066 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:30.140779 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:32.638747 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:34.639250 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:37.137644 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:39.138045 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:41.138733 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:43.139171 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:45.142012 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:47.638719 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:50.139130 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:52.637794 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:54.638451 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:57.137807 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:09:59.640347 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:02.138615 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:04.140843 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:06.639153 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:09.139049 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	W1216 07:10:11.139172 1687487 pod_ready.go:104] pod "kube-scheduler-ha-614518-m02" is not "Ready", error: <nil>
	I1216 07:10:12.605718 1687487 pod_ready.go:86] duration metric: took 3m27.974087596s for pod "kube-scheduler-ha-614518-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:10:12.605749 1687487 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1216 07:10:12.605764 1687487 pod_ready.go:40] duration metric: took 4m0.001147095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:10:12.608877 1687487 out.go:203] 
	W1216 07:10:12.611764 1687487 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1216 07:10:12.614690 1687487 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.124962814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.124989079Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.128952589Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.128991022Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.12901366Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132385483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132445241Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.132506854Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.13550428Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:06:33 ha-614518 crio[669]: time="2025-12-16T07:06:33.135541393Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:06:40 ha-614518 conmon[1338]: conmon 5fb83a33391310c66121 <ninfo>: container 1340 exited with status 1
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.136243764Z" level=info msg="Removing container: 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.144051183Z" level=info msg="Error loading conmon cgroup of container 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7: cgroup deleted" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:06:41 ha-614518 crio[669]: time="2025-12-16T07:06:41.148857672Z" level=info msg="Removed container 4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7: kube-system/storage-provisioner/storage-provisioner" id=a7929592-8844-46c4-be42-8dc29f75bdf8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.517109075Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=970ca7aa-d95d-4794-95bf-de423f4d674f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.51851262Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a32745bb-0259-4638-a2da-ddc22003b22b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.519651775Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d94c0706-3299-449d-b1bc-9c7684af150f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.519773393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524607418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524785537Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/312caaa5394938283ea578f1d27f8818b3e8f0134608b0a17d956f12767c2e19/merged/etc/passwd: no such file or directory"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.524806846Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/312caaa5394938283ea578f1d27f8818b3e8f0134608b0a17d956f12767c2e19/merged/etc/group: no such file or directory"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.525065295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.544389448Z" level=info msg="Created container 1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56: kube-system/storage-provisioner/storage-provisioner" id=d94c0706-3299-449d-b1bc-9c7684af150f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.546147943Z" level=info msg="Starting container: 1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56" id=2384d913-1660-45b4-a9e4-4a12ccf89aa1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:07:21 ha-614518 crio[669]: time="2025-12-16T07:07:21.552972588Z" level=info msg="Started container" PID=1546 containerID=1093de574e036685973850230e9a40aa67d2a34b14bfd15aac259b4e32258a56 description=kube-system/storage-provisioner/storage-provisioner id=2384d913-1660-45b4-a9e4-4a12ccf89aa1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66940fef199cd7ea95fa467d76afd336228ac898a0c1f0e8c7b18e7972031eff
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	1093de574e036       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       7                   66940fef199cd       storage-provisioner                 kube-system
	62f5148caf573       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   5 minutes ago       Running             kube-controller-manager   8                   b4a4e435e1aa0       kube-controller-manager-ha-614518   kube-system
	5fb83a3339131       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       6                   66940fef199cd       storage-provisioner                 kube-system
	95092e298b4a2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Exited              kube-controller-manager   7                   b4a4e435e1aa0       kube-controller-manager-ha-614518   kube-system
	d39155885e822       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   041859eb301b3       coredns-66bc5c9577-j2dlk            kube-system
	6e64e350bfcdb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   ceeed389a3540       kindnet-t2849                       kube-system
	df7febb900c92       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   7 minutes ago       Running             kube-proxy                2                   e6f1de1edc5ee       kube-proxy-4kdt5                    kube-system
	e3a995a401390       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   288cd575c38a7       busybox-7b57f96db7-9rkhz            default
	a0d878c4d93ed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   6735c66af1b27       coredns-66bc5c9577-wnl5v            kube-system
	11e4b44d62d54       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  2                   01654879d92ce       kube-vip-ha-614518                  kube-system
	b6e4d702970e6       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Running             etcd                      2                   b24e85033a9a6       etcd-ha-614518                      kube-system
	c0e9d15ebb1cd       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            2                   2ec038c0eb369       kube-scheduler-ha-614518            kube-system
	db591d0d437f8       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   7 minutes ago       Running             kube-apiserver            2                   3ea7ac550801f       kube-apiserver-ha-614518            kube-system
	
	
	==> coredns [a0d878c4d93ed5aa6b99a6ea96df4f5ccb53c918a3bac903f7dae29fc1cf61ee] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d39155885e822c355840ab6f40d6597b04bb705e1978f74a686ce74f90174ae9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-614518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T06_55_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:55:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:11:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:10:30 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:10:30 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:10:30 +0000   Tue, 16 Dec 2025 06:55:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:10:30 +0000   Tue, 16 Dec 2025 07:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-614518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                95037a50-a335-45c4-b961-153de44dd8af
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9rkhz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-j2dlk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-66bc5c9577-wnl5v             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-614518                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-t2849                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-614518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-614518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4kdt5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-614518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-614518                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 5m20s                  kube-proxy       
	  Normal   Starting                 9m38s                  kube-proxy       
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-614518 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m18s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           9m9s                   node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   Starting                 7m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m48s (x8 over 7m48s)  kubelet          Node ha-614518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m48s (x8 over 7m48s)  kubelet          Node ha-614518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m48s (x8 over 7m48s)  kubelet          Node ha-614518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           5m8s                   node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-614518 event: Registered Node ha-614518 in Controller
	
	
	Name:               ha-614518-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_16T06_56_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:11:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:11:38 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:11:38 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:11:38 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:11:38 +0000   Tue, 16 Dec 2025 07:00:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-614518-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                0e50aad9-c8f5-4539-a363-29b4940497ef
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q9kjv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-614518-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-qpdxp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-614518-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-614518-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fhwcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-614518-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-614518-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 5m33s                  kube-proxy       
	  Normal   Starting                 9m27s                  kube-proxy       
	  Normal   RegisteredNode           15m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             10m                    node-controller  Node ha-614518-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m18s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           9m9s                   node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   Starting                 7m45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m45s (x8 over 7m45s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m45s (x8 over 7m45s)  kubelet          Node ha-614518-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m45s (x8 over 7m45s)  kubelet          Node ha-614518-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        6m45s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           5m8s                   node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-614518-m02 event: Registered Node ha-614518-m02 in Controller
	
	
	Name:               ha-614518-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_16T06_58_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 06:58:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:11:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:08:41 +0000   Tue, 16 Dec 2025 06:59:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-614518-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                b5a1c428-1aac-458a-ac8c-b2278f4653df
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-d8h6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kindnet-kwm49               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-bmxpt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m46s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 5m16s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)      kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)      kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)      kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-614518-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           9m18s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   Starting                 9m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           9m9s                   node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   NodeHasSufficientPID     9m6s (x8 over 9m9s)    kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m6s (x8 over 9m9s)    kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m6s (x8 over 9m9s)    kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Warning  CgroupV1                 5m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 5m34s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m31s (x8 over 5m34s)  kubelet          Node ha-614518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m31s (x8 over 5m34s)  kubelet          Node ha-614518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m31s (x8 over 5m34s)  kubelet          Node ha-614518-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m8s                   node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-614518-m04 event: Registered Node ha-614518-m04 in Controller
	
	
	Name:               ha-614518-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-614518-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=ha-614518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_16T07_10_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 07:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-614518-m05
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:11:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:11:36 +0000   Tue, 16 Dec 2025 07:10:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:11:36 +0000   Tue, 16 Dec 2025 07:10:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:11:36 +0000   Tue, 16 Dec 2025 07:10:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:11:36 +0000   Tue, 16 Dec 2025 07:11:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-614518-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                b2889b53-3b32-4256-b474-ff5d8602327e
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-614518-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-g6h5z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      47s
	  kube-system                 kube-apiserver-ha-614518-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-ha-614518-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-5x2t4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-ha-614518-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-vip-ha-614518-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        45s   kube-proxy       
	  Normal  RegisteredNode  47s   node-controller  Node ha-614518-m05 event: Registered Node ha-614518-m05 in Controller
	  Normal  RegisteredNode  44s   node-controller  Node ha-614518-m05 event: Registered Node ha-614518-m05 in Controller
	  Normal  RegisteredNode  43s   node-controller  Node ha-614518-m05 event: Registered Node ha-614518-m05 in Controller
	
	
	==> dmesg <==
	[Dec16 06:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 06:13] overlayfs: idmapped layers are currently not supported
	[Dec16 06:19] overlayfs: idmapped layers are currently not supported
	[Dec16 06:20] overlayfs: idmapped layers are currently not supported
	[Dec16 06:38] overlayfs: idmapped layers are currently not supported
	[Dec16 06:55] overlayfs: idmapped layers are currently not supported
	[Dec16 06:56] overlayfs: idmapped layers are currently not supported
	[Dec16 06:57] overlayfs: idmapped layers are currently not supported
	[Dec16 06:58] overlayfs: idmapped layers are currently not supported
	[Dec16 07:00] overlayfs: idmapped layers are currently not supported
	[Dec16 07:01] overlayfs: idmapped layers are currently not supported
	[  +3.826905] overlayfs: idmapped layers are currently not supported
	[Dec16 07:02] overlayfs: idmapped layers are currently not supported
	[ +35.241631] overlayfs: idmapped layers are currently not supported
	[Dec16 07:03] overlayfs: idmapped layers are currently not supported
	[  +2.815105] overlayfs: idmapped layers are currently not supported
	[Dec16 07:06] overlayfs: idmapped layers are currently not supported
	[Dec16 07:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b6e4d702970e634028ab9da9ca8e258d02bb0aa908a74a428d72bd35cdec320d] <==
	{"level":"info","ts":"2025-12-16T07:10:39.506963Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:39.553990Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":7176192,"size":"7.2 MB"}
	{"level":"error","ts":"2025-12-16T07:10:39.630968Z","caller":"etcdserver/server.go:1601","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1601\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1542\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1514\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1466\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.(*ClusterServer).MemberPromote\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/member.go:101\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler.func1\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7432\ngo.etcd.io/etcd/server/v3/etcdserv
er/api/v3rpc.Server.(*ServerMetrics).UnaryServerInterceptor.UnaryServerInterceptor.func12\n\tgithub.com/grpc-ecosystem/go-grpc-middleware/v2@v2.1.0/interceptors/server.go:22\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newUnaryInterceptor.func5\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:74\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newLogUnaryInterceptor.func4\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:81\ngoogle.golang.org/grpc.NewServer.chainUnaryServerInterceptors.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1208\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7434\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoo
gle.golang.org/grpc@v1.71.1/server.go:1405\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1815\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1035"}
	{"level":"info","ts":"2025-12-16T07:10:39.860043Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":4284,"remote-peer-id":"d35424e99d091485","bytes":7185767,"size":"7.2 MB"}
	{"level":"warn","ts":"2025-12-16T07:10:40.078218Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:10:40.086705Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485","error":"EOF"}
	{"level":"info","ts":"2025-12-16T07:10:40.107883Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(10472946594789291214 12593026477526642892 15227836825827087493)"}
	{"level":"info","ts":"2025-12-16T07:10:40.108345Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:40.108443Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"d35424e99d091485"}
	{"level":"warn","ts":"2025-12-16T07:10:40.127605Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d35424e99d091485","error":"failed to write d35424e99d091485 on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:36916: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-16T07:10:40.136978Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485"}
	{"level":"warn","ts":"2025-12-16T07:10:40.176428Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:40.305614Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:40.718462Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"d35424e99d091485","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-16T07:10:40.718538Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:40.836583Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:40.981482Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:41.015377Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"d35424e99d091485","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-16T07:10:41.015427Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d35424e99d091485"}
	{"level":"info","ts":"2025-12-16T07:10:53.613011Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-16T07:10:59.787157Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"warn","ts":"2025-12-16T07:11:00.315714Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.967369ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128042014890400324 > lease_revoke:<id:70cc9b25f8fe19aa>","response":"size:29"}
	{"level":"info","ts":"2025-12-16T07:11:09.861064Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"d35424e99d091485","bytes":7185767,"size":"7.2 MB","took":"30.47994317s"}
	{"level":"warn","ts":"2025-12-16T07:11:40.625183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.434408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:368124"}
	{"level":"info","ts":"2025-12-16T07:11:40.625262Z","caller":"traceutil/trace.go:172","msg":"trace[357865829] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3901; }","duration":"157.520481ms","start":"2025-12-16T07:11:40.467722Z","end":"2025-12-16T07:11:40.625242Z","steps":["trace[357865829] 'range keys from bolt db'  (duration: 156.41957ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:11:40 up  9:54,  0 user,  load average: 1.03, 1.60, 1.58
	Linux ha-614518 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e64e350bfcdb0ad3cefabf63e1a4acc10762dcf6c5cfb20629a03af5db77445] <==
	I1216 07:11:03.121875       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:11:13.121638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:11:13.121769       1 main.go:301] handling current node
	I1216 07:11:13.121810       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:11:13.121840       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:11:13.121991       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:11:13.122011       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:11:13.122073       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1216 07:11:13.122086       1 main.go:324] Node ha-614518-m05 has CIDR [10.244.2.0/24] 
	I1216 07:11:23.121036       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:11:23.121072       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:11:23.121215       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:11:23.121230       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:11:23.121291       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1216 07:11:23.121304       1 main.go:324] Node ha-614518-m05 has CIDR [10.244.2.0/24] 
	I1216 07:11:23.122044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:11:23.122066       1 main.go:301] handling current node
	I1216 07:11:33.121533       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 07:11:33.121564       1 main.go:324] Node ha-614518-m02 has CIDR [10.244.1.0/24] 
	I1216 07:11:33.121744       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 07:11:33.121759       1 main.go:324] Node ha-614518-m04 has CIDR [10.244.3.0/24] 
	I1216 07:11:33.121818       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1216 07:11:33.121828       1 main.go:324] Node ha-614518-m05 has CIDR [10.244.2.0/24] 
	I1216 07:11:33.121879       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 07:11:33.121890       1 main.go:301] handling current node
	
	
	==> kube-apiserver [db591d0d437f81b8c65552b6efbd2ca8fb29bb1e0989d62b2cce8be69b46105c] <==
	{"level":"warn","ts":"2025-12-16T07:05:36.946250Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40030372c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946270Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40010305a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946293Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4b680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946313Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd7c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946331Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4a780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946455Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018ae3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.946777Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953177Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a5680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953240Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400137ba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953261Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c85c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953280Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c84000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953301Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400274cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953323Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018563c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953342Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018574a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953358Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953376Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001856d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953393Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cd72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953410Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40017050e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953428Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002be8f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953452Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002213c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953475Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400286b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-16T07:05:36.953492Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f4a000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1216 07:05:52.809366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1216 07:06:04.987898       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1216 07:10:53.669592       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [62f5148caf57328eb2231340bd1f0fda0819319965c786abfdb83aeb5ed01f5e] <==
	I1216 07:06:32.763628       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 07:06:32.767053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:06:32.771968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:06:32.772032       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 07:06:32.772041       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 07:06:32.772555       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 07:06:32.776046       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 07:06:32.776052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 07:06:32.786517       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 07:06:32.786581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 07:06:32.786621       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 07:06:32.786648       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 07:06:32.786659       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 07:06:32.786673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 07:06:32.798877       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 07:06:32.805236       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 07:06:32.809401       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 07:06:32.814713       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	E1216 07:10:52.668314       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-dgb4s failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-dgb4s\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1216 07:10:52.693279       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-dgb4s failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-dgb4s\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1216 07:10:53.234790       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-614518-m05\" does not exist"
	I1216 07:10:53.234953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-614518-m04"
	I1216 07:10:53.310276       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-614518-m05" podCIDRs=["10.244.2.0/24"]
	I1216 07:10:57.840907       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-614518-m05"
	I1216 07:11:36.235628       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-614518-m04"
	
	
	==> kube-controller-manager [95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478] <==
	I1216 07:05:28.645623       1 serving.go:386] Generated self-signed cert in-memory
	I1216 07:05:29.867986       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1216 07:05:29.868027       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:05:29.870775       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 07:05:29.870890       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 07:05:29.871397       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1216 07:05:29.871485       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 07:05:41.889989       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [df7febb900c92c1ec552f11013f0ffc72f6a301ff2a34356063a3a3d5508e6f6] <==
	E1216 07:04:12.258444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:21.248820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:33.125461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:04:58.756889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:05:31.552851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-614518&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1216 07:06:20.432877       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 07:06:20.432912       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 07:06:20.432992       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 07:06:20.451656       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 07:06:20.451712       1 server_linux.go:132] "Using iptables Proxier"
	I1216 07:06:20.455545       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 07:06:20.455867       1 server.go:527] "Version info" version="v1.34.2"
	I1216 07:06:20.455889       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:06:20.457590       1 config.go:200] "Starting service config controller"
	I1216 07:06:20.457611       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 07:06:20.457630       1 config.go:106] "Starting endpoint slice config controller"
	I1216 07:06:20.457635       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 07:06:20.457646       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 07:06:20.457649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 07:06:20.458370       1 config.go:309] "Starting node config controller"
	I1216 07:06:20.458392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 07:06:20.458399       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 07:06:20.558370       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 07:06:20.558388       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 07:06:20.558421       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0e9d15ebb1cd884461c491d76b9c135253b28403f1a18a97c1bdb68443fe858] <==
	E1216 07:04:00.841266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 07:04:00.859797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 07:04:00.879356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 07:04:00.886912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 07:04:00.919634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 07:04:00.958908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 07:04:00.959058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 07:04:01.001026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 07:04:01.006661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 07:04:01.010174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 07:04:01.037770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 07:04:01.074332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 07:04:01.101325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 07:04:01.113105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 07:04:01.257180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1216 07:04:01.380284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1216 07:04:04.392043       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1216 07:10:53.366660       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-g6h5z\": pod kindnet-g6h5z is already assigned to node \"ha-614518-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-g6h5z" node="ha-614518-m05"
	E1216 07:10:53.366729       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c19534b3-4327-4a56-b764-c9fbf1516fd3(kube-system/kindnet-g6h5z) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-g6h5z"
	E1216 07:10:53.366754       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-g6h5z\": pod kindnet-g6h5z is already assigned to node \"ha-614518-m05\"" logger="UnhandledError" pod="kube-system/kindnet-g6h5z"
	I1216 07:10:53.367891       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-g6h5z" node="ha-614518-m05"
	E1216 07:10:53.408427       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5x2t4\": pod kube-proxy-5x2t4 is already assigned to node \"ha-614518-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5x2t4" node="ha-614518-m05"
	E1216 07:10:53.408594       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f6ab22f8-d9a1-440e-9992-f60521622b41(kube-system/kube-proxy-5x2t4) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-5x2t4"
	E1216 07:10:53.408627       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5x2t4\": pod kube-proxy-5x2t4 is already assigned to node \"ha-614518-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-5x2t4"
	I1216 07:10:53.426017       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5x2t4" node="ha-614518-m05"
	
	
	==> kubelet <==
	Dec 16 07:05:41 ha-614518 kubelet[805]: E1216 07:05:41.963351     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:43 ha-614518 kubelet[805]: I1216 07:05:43.172588     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:05:43 ha-614518 kubelet[805]: E1216 07:05:43.173239     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:47 ha-614518 kubelet[805]: I1216 07:05:47.980385     805 scope.go:117] "RemoveContainer" containerID="1b90f35e8fe79482d5c14218f1e2e65c47d65394a6eeb0612fbb2b19206d27c7"
	Dec 16 07:05:47 ha-614518 kubelet[805]: I1216 07:05:47.980741     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:05:47 ha-614518 kubelet[805]: E1216 07:05:47.980882     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:05:51 ha-614518 kubelet[805]: I1216 07:05:51.159869     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:05:51 ha-614518 kubelet[805]: E1216 07:05:51.160099     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:05:59 ha-614518 kubelet[805]: I1216 07:05:59.515424     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:05:59 ha-614518 kubelet[805]: E1216 07:05:59.516065     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:06:02 ha-614518 kubelet[805]: I1216 07:06:02.518762     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:02 ha-614518 kubelet[805]: E1216 07:06:02.519407     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:06:10 ha-614518 kubelet[805]: I1216 07:06:10.515860     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:06:16 ha-614518 kubelet[805]: I1216 07:06:16.515241     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:16 ha-614518 kubelet[805]: E1216 07:06:16.515866     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-614518_kube-system(1520b3299dadf726cb27cf58cec25cd2)\"" pod="kube-system/kube-controller-manager-ha-614518" podUID="1520b3299dadf726cb27cf58cec25cd2"
	Dec 16 07:06:28 ha-614518 kubelet[805]: I1216 07:06:28.515045     805 scope.go:117] "RemoveContainer" containerID="95092e298b4a275cf751be03abcd8305d183bb3b40e3bc28150dc77bb5adf478"
	Dec 16 07:06:41 ha-614518 kubelet[805]: I1216 07:06:41.132515     805 scope.go:117] "RemoveContainer" containerID="4b611d8c213d6b291fb7a3b72450bf97b5b458e31038413638c4e1e9a6beaaf7"
	Dec 16 07:06:41 ha-614518 kubelet[805]: I1216 07:06:41.132828     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:06:41 ha-614518 kubelet[805]: E1216 07:06:41.132959     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:06:52 ha-614518 kubelet[805]: E1216 07:06:52.544364     805 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1e1bffb0be7696eafc690b57ae72d068d188db906113cb72328c74f36504929d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1e1bffb0be7696eafc690b57ae72d068d188db906113cb72328c74f36504929d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_storage-provisioner_c8b9c00b-10bc-423c-b16e-3f3cdb12e907/storage-provisioner/5.log" to get inode usage: stat /var/log/pods/kube-system_storage-provisioner_c8b9c00b-10bc-423c-b16e-3f3cdb12e907/storage-provisioner/5.log: no such file or directory
	Dec 16 07:06:56 ha-614518 kubelet[805]: I1216 07:06:56.515333     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:06:56 ha-614518 kubelet[805]: E1216 07:06:56.515497     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:07:08 ha-614518 kubelet[805]: I1216 07:07:08.515499     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	Dec 16 07:07:08 ha-614518 kubelet[805]: E1216 07:07:08.515687     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c8b9c00b-10bc-423c-b16e-3f3cdb12e907)\"" pod="kube-system/storage-provisioner" podUID="c8b9c00b-10bc-423c-b16e-3f3cdb12e907"
	Dec 16 07:07:21 ha-614518 kubelet[805]: I1216 07:07:21.515729     805 scope.go:117] "RemoveContainer" containerID="5fb83a33391310c66121eddbbc2402a4ccfa716619e5d2b9a5e8333c2cbde2fa"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-614518 -n ha-614518
helpers_test.go:270: (dbg) Run:  kubectl --context ha-614518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.69s)
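Note on the scheduler errors in the post-mortem above: the "Operation cannot be fulfilled on pods/binding ... already assigned to node" messages mean kindnet-g6h5z and kube-proxy-5x2t4 were already bound to ha-614518-m05 by the time this scheduler instance tried to bind them, and the scheduler dropped its duplicate attempt ("Pod has been assigned to node. Abort adding it back to queue."). An illustrative manual spot-check, not part of the test harness, to confirm those pods actually landed on the new node:

	kubectl --context ha-614518 -n kube-system get pods -o wide --field-selector spec.nodeName=ha-614518-m05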

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.51s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-770419 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-770419 --output=json --user=testUser: exit status 80 (2.512477051s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a3f0d4b-1f95-44c2-9ee6-4a83ca424691","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-770419 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"aea83975-606a-47fb-aa22-0f38f777517c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T07:13:17Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"69ea0e8c-4577-4f65-b662-8d9d9c1aecc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-770419 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.51s)
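Note: the GUEST_PAUSE error above comes from minikube shelling out to "sudo runc list -f json" inside the node, which fails because /run/runc does not exist; the TestJSONOutput/unpause/Command failure that follows reports the same underlying error. An illustrative manual check (assuming the json-output-770419 profile is still running; these commands are not part of the test):

	out/minikube-linux-arm64 ssh -p json-output-770419 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p json-output-770419 -- ls -ld /run/runc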

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.85s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-770419 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-770419 --output=json --user=testUser: exit status 80 (1.848607881s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"48f39e04-04c5-49ba-8c43-b3d6c8da868d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-770419 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ca7c4d31-041c-4236-9b08-71767b511332","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T07:13:19Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b2dd566f-f384-445d-872e-af769da6d23c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-770419 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.85s)

                                                
                                    
x
+
TestKubernetesUpgrade (784.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.446518963s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-530870
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-530870: (1.375463147s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-530870 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-530870 status --format={{.Host}}: exit status 7 (109.027952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 07:38:08.326109 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m20.941569278s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-530870] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-530870" primary control-plane node in "kubernetes-upgrade-530870" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 07:37:00.913362 1798136 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:37:00.913553 1798136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:37:00.913562 1798136 out.go:374] Setting ErrFile to fd 2...
	I1216 07:37:00.913575 1798136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:37:00.913920 1798136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:37:00.914419 1798136 out.go:368] Setting JSON to false
	I1216 07:37:00.915551 1798136 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":37172,"bootTime":1765833449,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:37:00.915622 1798136 start.go:143] virtualization:  
	I1216 07:37:00.918907 1798136 out.go:179] * [kubernetes-upgrade-530870] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:37:00.922753 1798136 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:37:00.922852 1798136 notify.go:221] Checking for updates...
	I1216 07:37:00.932288 1798136 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:37:00.936224 1798136 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:37:00.939128 1798136 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:37:00.941876 1798136 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:37:00.944750 1798136 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:37:00.948090 1798136 config.go:182] Loaded profile config "kubernetes-upgrade-530870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 07:37:00.948812 1798136 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:37:00.993068 1798136 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:37:00.993194 1798136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:37:01.080439 1798136 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 07:37:01.06951693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:37:01.080562 1798136 docker.go:319] overlay module found
	I1216 07:37:01.084923 1798136 out.go:179] * Using the docker driver based on existing profile
	I1216 07:37:01.087789 1798136 start.go:309] selected driver: docker
	I1216 07:37:01.087810 1798136 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-530870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-530870 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:37:01.087913 1798136 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:37:01.088845 1798136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:37:01.182991 1798136 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 07:37:01.171440445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:37:01.183328 1798136 cni.go:84] Creating CNI manager for ""
	I1216 07:37:01.183387 1798136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 07:37:01.183426 1798136 start.go:353] cluster config:
	{Name:kubernetes-upgrade-530870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-530870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:37:01.186943 1798136 out.go:179] * Starting "kubernetes-upgrade-530870" primary control-plane node in "kubernetes-upgrade-530870" cluster
	I1216 07:37:01.190245 1798136 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:37:01.193257 1798136 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:37:01.196167 1798136 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 07:37:01.196222 1798136 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 07:37:01.196233 1798136 cache.go:65] Caching tarball of preloaded images
	I1216 07:37:01.196327 1798136 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:37:01.196337 1798136 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 07:37:01.196449 1798136 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/config.json ...
	I1216 07:37:01.196685 1798136 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:37:01.226534 1798136 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:37:01.226555 1798136 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:37:01.226571 1798136 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:37:01.226839 1798136 start.go:360] acquireMachinesLock for kubernetes-upgrade-530870: {Name:mk52d20db8d08f553244ad4973dfee196a718106 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:37:01.226974 1798136 start.go:364] duration metric: took 97.88µs to acquireMachinesLock for "kubernetes-upgrade-530870"
	I1216 07:37:01.227000 1798136 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:37:01.227005 1798136 fix.go:54] fixHost starting: 
	I1216 07:37:01.227287 1798136 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530870 --format={{.State.Status}}
	I1216 07:37:01.261497 1798136 fix.go:112] recreateIfNeeded on kubernetes-upgrade-530870: state=Stopped err=<nil>
	W1216 07:37:01.261528 1798136 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:37:01.264695 1798136 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-530870" ...
	I1216 07:37:01.264795 1798136 cli_runner.go:164] Run: docker start kubernetes-upgrade-530870
	I1216 07:37:01.619592 1798136 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530870 --format={{.State.Status}}
	I1216 07:37:01.661557 1798136 kic.go:430] container "kubernetes-upgrade-530870" state is running.
	I1216 07:37:01.661966 1798136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-530870
	I1216 07:37:01.696695 1798136 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/config.json ...
	I1216 07:37:01.696929 1798136 machine.go:94] provisionDockerMachine start ...
	I1216 07:37:01.696994 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:01.726553 1798136 main.go:143] libmachine: Using SSH client type: native
	I1216 07:37:01.726877 1798136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34510 <nil> <nil>}
	I1216 07:37:01.726886 1798136 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:37:01.727504 1798136 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50598->127.0.0.1:34510: read: connection reset by peer
	I1216 07:37:04.884606 1798136 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-530870
	
	I1216 07:37:04.884632 1798136 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-530870"
	I1216 07:37:04.884725 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:04.911029 1798136 main.go:143] libmachine: Using SSH client type: native
	I1216 07:37:04.911352 1798136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34510 <nil> <nil>}
	I1216 07:37:04.911373 1798136 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-530870 && echo "kubernetes-upgrade-530870" | sudo tee /etc/hostname
	I1216 07:37:05.074267 1798136 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-530870
	
	I1216 07:37:05.074390 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:05.095768 1798136 main.go:143] libmachine: Using SSH client type: native
	I1216 07:37:05.096089 1798136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34510 <nil> <nil>}
	I1216 07:37:05.096106 1798136 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-530870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-530870/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-530870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:37:05.245280 1798136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:37:05.245321 1798136 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:37:05.245358 1798136 ubuntu.go:190] setting up certificates
	I1216 07:37:05.245385 1798136 provision.go:84] configureAuth start
	I1216 07:37:05.245490 1798136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-530870
	I1216 07:37:05.269511 1798136 provision.go:143] copyHostCerts
	I1216 07:37:05.269588 1798136 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:37:05.269600 1798136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:37:05.269680 1798136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:37:05.269797 1798136 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:37:05.269808 1798136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:37:05.269836 1798136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:37:05.269894 1798136 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:37:05.269904 1798136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:37:05.269930 1798136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:37:05.269983 1798136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-530870 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-530870 localhost minikube]
	I1216 07:37:05.351601 1798136 provision.go:177] copyRemoteCerts
	I1216 07:37:05.355442 1798136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:37:05.355515 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:05.377452 1798136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34510 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/kubernetes-upgrade-530870/id_rsa Username:docker}
	I1216 07:37:05.472246 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:37:05.490675 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:37:05.508238 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 07:37:05.527162 1798136 provision.go:87] duration metric: took 281.736232ms to configureAuth
	I1216 07:37:05.527189 1798136 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:37:05.527383 1798136 config.go:182] Loaded profile config "kubernetes-upgrade-530870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 07:37:05.527501 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:05.545415 1798136 main.go:143] libmachine: Using SSH client type: native
	I1216 07:37:05.545730 1798136 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34510 <nil> <nil>}
	I1216 07:37:05.545752 1798136 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:37:05.865703 1798136 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:37:05.865792 1798136 machine.go:97] duration metric: took 4.168853183s to provisionDockerMachine
	I1216 07:37:05.865819 1798136 start.go:293] postStartSetup for "kubernetes-upgrade-530870" (driver="docker")
	I1216 07:37:05.865849 1798136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:37:05.865986 1798136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:37:05.866063 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:05.883794 1798136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34510 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/kubernetes-upgrade-530870/id_rsa Username:docker}
	I1216 07:37:05.980426 1798136 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:37:05.983725 1798136 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:37:05.983756 1798136 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:37:05.983769 1798136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:37:05.983827 1798136 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:37:05.983914 1798136 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:37:05.984024 1798136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:37:05.991711 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:37:06.017770 1798136 start.go:296] duration metric: took 151.920246ms for postStartSetup
	I1216 07:37:06.017859 1798136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:37:06.017923 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:06.040575 1798136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34510 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/kubernetes-upgrade-530870/id_rsa Username:docker}
	I1216 07:37:06.134244 1798136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:37:06.139275 1798136 fix.go:56] duration metric: took 4.912262032s for fixHost
	I1216 07:37:06.139310 1798136 start.go:83] releasing machines lock for "kubernetes-upgrade-530870", held for 4.912317671s
	I1216 07:37:06.139384 1798136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-530870
	I1216 07:37:06.157623 1798136 ssh_runner.go:195] Run: cat /version.json
	I1216 07:37:06.157692 1798136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:37:06.157756 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:06.157699 1798136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530870
	I1216 07:37:06.180254 1798136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34510 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/kubernetes-upgrade-530870/id_rsa Username:docker}
	I1216 07:37:06.186248 1798136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34510 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/kubernetes-upgrade-530870/id_rsa Username:docker}
	I1216 07:37:06.272841 1798136 ssh_runner.go:195] Run: systemctl --version
	I1216 07:37:06.387746 1798136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:37:06.424326 1798136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:37:06.428750 1798136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:37:06.428851 1798136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:37:06.437707 1798136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:37:06.437773 1798136 start.go:496] detecting cgroup driver to use...
	I1216 07:37:06.437821 1798136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:37:06.437879 1798136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:37:06.459721 1798136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:37:06.473655 1798136 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:37:06.473721 1798136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:37:06.489532 1798136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:37:06.502856 1798136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:37:06.619432 1798136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:37:06.735889 1798136 docker.go:234] disabling docker service ...
	I1216 07:37:06.735953 1798136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:37:06.750729 1798136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:37:06.764882 1798136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:37:06.878760 1798136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:37:06.997244 1798136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:37:07.011868 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:37:07.026285 1798136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:37:07.026351 1798136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:37:07.035135 1798136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:37:07.035274 1798136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:37:07.044462 1798136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:37:07.053649 1798136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:37:07.062557 1798136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:37:07.071006 1798136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:37:07.081022 1798136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:37:07.089763 1798136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:37:07.098732 1798136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:37:07.106596 1798136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:37:07.114414 1798136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:37:07.236701 1798136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:37:07.400231 1798136 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:37:07.400346 1798136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:37:07.404058 1798136 start.go:564] Will wait 60s for crictl version
	I1216 07:37:07.404167 1798136 ssh_runner.go:195] Run: which crictl
	I1216 07:37:07.407492 1798136 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:37:07.431798 1798136 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:37:07.431923 1798136 ssh_runner.go:195] Run: crio --version
	I1216 07:37:07.466185 1798136 ssh_runner.go:195] Run: crio --version
	I1216 07:37:07.514734 1798136 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 07:37:07.517750 1798136 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-530870 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:37:07.545272 1798136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 07:37:07.549760 1798136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:37:07.560309 1798136 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-530870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-530870 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:37:07.560432 1798136 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 07:37:07.560506 1798136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:37:07.613427 1798136 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1216 07:37:07.613496 1798136 ssh_runner.go:195] Run: which lz4
	I1216 07:37:07.617281 1798136 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 07:37:07.620972 1798136 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 07:37:07.621008 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1216 07:37:09.479535 1798136 crio.go:462] duration metric: took 1.862302386s to copy over tarball
	I1216 07:37:09.479699 1798136 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 07:37:11.537638 1798136 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.057890608s)
	I1216 07:37:11.537669 1798136 crio.go:469] duration metric: took 2.058053908s to extract the tarball
	I1216 07:37:11.537677 1798136 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 07:37:11.583764 1798136 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:37:11.623276 1798136 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:37:11.623299 1798136 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:37:11.623307 1798136 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 07:37:11.623413 1798136 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-530870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-530870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
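The unit snippet above is the systemd drop-in minikube renders for the kubelet: ExecStart is cleared and then replaced with the version-pinned binary and node-specific flags. A minimal sketch of how such a drop-in is applied on the node (paths match the ones used later in this log; the heredoc and the final start step are illustrative, and the flag list is abbreviated):

# Illustrative re-application of a kubelet drop-in like the one rendered above.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
# (flag list abbreviated; see the unit above for the full set)

[Install]
EOF
sudo systemctl daemon-reload   # same reload the log performs before starting kubelet
sudo systemctl start kubelet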
	I1216 07:37:11.623499 1798136 ssh_runner.go:195] Run: crio config
	I1216 07:37:11.706224 1798136 cni.go:84] Creating CNI manager for ""
	I1216 07:37:11.706246 1798136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 07:37:11.706265 1798136 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:37:11.706308 1798136 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-530870 NodeName:kubernetes-upgrade-530870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:37:11.706452 1798136 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-530870"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
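The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick way to sanity-check such a file before reconfiguring, assuming this kubeadm build ships the validate subcommand (a sketch, not something this run executed):

# Validate the freshly rendered kubeadm config with the version-pinned binary.
KUBEADM=/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new
# Print the fully-defaulted view for comparison (purely illustrative):
sudo "$KUBEADM" config print init-defaults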
	
	I1216 07:37:11.706531 1798136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 07:37:11.715569 1798136 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:37:11.715669 1798136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 07:37:11.723260 1798136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1216 07:37:11.739034 1798136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 07:37:11.751681 1798136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1216 07:37:11.764690 1798136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 07:37:11.768294 1798136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:37:11.778709 1798136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:37:11.901094 1798136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:37:11.916919 1798136 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870 for IP: 192.168.76.2
	I1216 07:37:11.916941 1798136 certs.go:195] generating shared ca certs ...
	I1216 07:37:11.916957 1798136 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:37:11.917092 1798136 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:37:11.917139 1798136 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:37:11.917151 1798136 certs.go:257] generating profile certs ...
	I1216 07:37:11.917245 1798136 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/client.key
	I1216 07:37:11.917312 1798136 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/apiserver.key.bcb95075
	I1216 07:37:11.917359 1798136 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/proxy-client.key
	I1216 07:37:11.917481 1798136 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:37:11.917518 1798136 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:37:11.917533 1798136 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:37:11.917561 1798136 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:37:11.917589 1798136 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:37:11.917616 1798136 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:37:11.917664 1798136 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:37:11.918247 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:37:11.942380 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:37:11.968004 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:37:11.986924 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:37:12.006447 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 07:37:12.026646 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:37:12.045048 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:37:12.063400 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 07:37:12.081626 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:37:12.100207 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:37:12.117827 1798136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:37:12.136504 1798136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:37:12.149437 1798136 ssh_runner.go:195] Run: openssl version
	I1216 07:37:12.158563 1798136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:37:12.166931 1798136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:37:12.177608 1798136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:37:12.182284 1798136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:37:12.182381 1798136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:37:12.227841 1798136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:37:12.238352 1798136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:37:12.245870 1798136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:37:12.254268 1798136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:37:12.258683 1798136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:37:12.258783 1798136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:37:12.303724 1798136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:37:12.311885 1798136 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:37:12.323492 1798136 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:37:12.331516 1798136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:37:12.336573 1798136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:37:12.336672 1798136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:37:12.378826 1798136 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
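The repeated test/ln/hash sequence above installs each CA into the system trust store: OpenSSL locates CAs through subject-hash symlinks, so minikube links the PEM into /etc/ssl/certs and then verifies that a <hash>.0 symlink exists (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A condensed sketch of that pattern for one certificate:

# Hash-and-link pattern used above; OpenSSL resolves CAs by subject hash.
pem=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$pem")     # e.g. b5213941
sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"      # create or refresh the symlink
sudo test -L "/etc/ssl/certs/$hash.0" && echo "CA trusted via $hash.0"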
	I1216 07:37:12.386584 1798136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:37:12.391108 1798136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:37:12.434635 1798136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:37:12.477710 1798136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:37:12.520153 1798136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:37:12.562891 1798136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:37:12.605256 1798136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
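Each -checkend 86400 probe above asks whether the certificate will still be valid 24 hours from now; openssl exits 0 if it will and non-zero if it would expire within that window, which lets the caller decide whether regeneration is needed. A sketch for a single certificate:

# -checkend semantics: exit status 0 means still valid 86400s (24h) from now.
crt=/var/lib/minikube/certs/apiserver-kubelet-client.crt
if sudo openssl x509 -noout -in "$crt" -checkend 86400; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate expires within 24h"
fi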
	I1216 07:37:12.654059 1798136 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-530870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-530870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:37:12.654155 1798136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:37:12.654275 1798136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:37:12.747708 1798136 cri.go:89] found id: ""
	I1216 07:37:12.747856 1798136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:37:12.762009 1798136 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 07:37:12.762033 1798136 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 07:37:12.762117 1798136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 07:37:12.774705 1798136 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:37:12.775472 1798136 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-530870" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:37:12.775803 1798136 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-1596013/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-530870" cluster setting kubeconfig missing "kubernetes-upgrade-530870" context setting]
	I1216 07:37:12.776527 1798136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:37:12.777462 1798136 kapi.go:59] client config for kubernetes-upgrade-530870: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/kubernetes-upgrade-530870/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:37:12.778472 1798136 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 07:37:12.778497 1798136 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 07:37:12.778503 1798136 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 07:37:12.778542 1798136 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 07:37:12.778554 1798136 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 07:37:12.778991 1798136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 07:37:12.797839 1798136 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 07:36:38.794466432 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 07:37:11.758855812 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-530870"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
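The diff above is the detected drift: the on-disk config still uses kubeadm.k8s.io/v1beta3 (map-style extraArgs, kubernetesVersion v1.28.0), while the newly rendered one uses v1beta4, where extraArgs becomes a list of name/value pairs, and targets v1.35.0-beta.0. Minikube detects this with a plain diff and then replaces the old file, as sketched below (mirroring the diff/cp pair in this log):

# Drift check: reconfigure only when the rendered config differs from what is
# already on the node.
if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
  echo "kubeadm config drift detected - reconfiguring"
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
fi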
	I1216 07:37:12.797871 1798136 kubeadm.go:1161] stopping kube-system containers ...
	I1216 07:37:12.797887 1798136 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 07:37:12.798021 1798136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:37:12.835616 1798136 cri.go:89] found id: ""
	I1216 07:37:12.835744 1798136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 07:37:12.871769 1798136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 07:37:12.880371 1798136 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 16 07:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 16 07:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 16 07:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 16 07:36 /etc/kubernetes/scheduler.conf
	
	I1216 07:37:12.880463 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 07:37:12.889058 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 07:37:12.897760 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 07:37:12.905847 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:37:12.905935 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 07:37:12.913796 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 07:37:12.924751 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:37:12.924862 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
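The grep/rm sequence above keeps only the kubeconfigs that already point at https://control-plane.minikube.internal:8443 (admin.conf and kubelet.conf here) and deletes the stale ones (controller-manager.conf and scheduler.conf), so the kubeconfig phase below can regenerate them. Condensed into a loop:

# Remove kubeconfigs that do not reference the expected control-plane endpoint.
endpoint="https://control-plane.minikube.internal:8443"
for f in /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
         /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
  sudo grep -q "$endpoint" "$f" || sudo rm -f "$f"
done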
	I1216 07:37:12.938612 1798136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 07:37:12.947758 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 07:37:13.023527 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 07:37:14.317661 1798136 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.294101791s)
	I1216 07:37:14.317751 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 07:37:14.528044 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 07:37:14.582622 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
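Rather than a full kubeadm init, the restart path re-runs individual phases against the updated config: certs, kubeconfig, kubelet-start, control-plane and etcd, each with the version-pinned binaries prepended to PATH. A compact sketch of the same sequence:

# Phased cluster restart, mirroring the five "kubeadm init phase" calls above.
BIN=/var/lib/minikube/binaries/v1.35.0-beta.0
cfg=/var/tmp/minikube/kubeadm.yaml
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  # $phase is intentionally unquoted so "certs all" expands to two arguments.
  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$cfg"
done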
	I1216 07:37:14.626853 1798136 api_server.go:52] waiting for apiserver process to appear ...
	I1216 07:37:14.626959 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:15.127089 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:15.627290 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:16.127818 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:16.627063 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:17.127204 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:17.627148 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:18.127022 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:18.627884 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:19.127681 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:19.627861 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:20.127104 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:20.627994 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:21.127710 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:21.627773 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:22.127049 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:22.627112 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:23.127092 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:23.627057 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:24.127905 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:24.627800 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:25.128042 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:25.627138 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:26.126999 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:26.627108 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:27.127582 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:27.627161 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:28.127939 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:28.627714 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:29.127132 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:29.627918 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:30.127613 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:30.627732 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:31.127506 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:31.627454 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:32.127665 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:32.627933 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:33.127358 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:33.627114 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:34.127073 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:34.627110 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:35.127905 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:35.627915 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:36.127504 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:36.628050 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:37.127437 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:37.627087 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:38.127200 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:38.627065 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:39.128020 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:39.628049 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:40.127799 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:40.627054 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:41.127971 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:41.627115 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:42.127724 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:42.627088 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:43.127642 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:43.627886 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:44.127777 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:44.627667 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:45.129077 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:45.627142 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:46.127750 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:46.627315 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:47.128040 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:47.627769 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:48.127967 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:48.627067 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:49.127772 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:49.627176 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:50.128017 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:50.628031 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:51.127773 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:51.627934 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:52.127198 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:52.627172 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:53.127827 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:53.627155 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:54.127331 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:54.627061 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:55.127058 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:55.627140 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:56.127574 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:56.627321 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:57.127046 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:57.627249 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:58.127242 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:58.627078 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:59.127410 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:37:59.628037 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:00.128041 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:00.627789 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:01.127797 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:01.627695 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:02.127665 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:02.627950 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:03.127796 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:03.627985 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:04.127782 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:04.628023 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:05.127203 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:05.627703 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:06.127083 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:06.627596 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:07.127425 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:07.627059 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:08.127666 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:08.627886 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:09.127053 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:09.627056 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:10.127979 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:10.627810 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:11.127376 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:11.627364 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:12.127123 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:12.627639 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:13.127725 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:13.628012 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:14.127064 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
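The block above is minikube polling for a kube-apiserver process roughly every 500 ms; in this run no process ever appears, so it falls through to gathering container and journal logs below. A sketch of that wait loop with an explicit timeout (the 60 s value is illustrative, not minikube's actual wait window):

# Poll for the apiserver process about twice a second until a deadline passes.
deadline=$((SECONDS + 60))
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  if (( SECONDS >= deadline )); then
    echo "kube-apiserver process never appeared" >&2
    break
  fi
  sleep 0.5
done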
	I1216 07:38:14.627027 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:14.627112 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:14.685890 1798136 cri.go:89] found id: ""
	I1216 07:38:14.685928 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.685938 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:14.685945 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:14.686004 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:14.731495 1798136 cri.go:89] found id: ""
	I1216 07:38:14.731516 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.731525 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:14.731536 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:14.731596 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:14.780214 1798136 cri.go:89] found id: ""
	I1216 07:38:14.780236 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.780245 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:14.780251 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:14.780310 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:14.807278 1798136 cri.go:89] found id: ""
	I1216 07:38:14.807300 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.807308 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:14.807315 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:14.807371 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:14.840936 1798136 cri.go:89] found id: ""
	I1216 07:38:14.840998 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.841032 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:14.841056 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:14.841173 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:14.879220 1798136 cri.go:89] found id: ""
	I1216 07:38:14.879243 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.879252 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:14.879269 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:14.879327 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:14.917021 1798136 cri.go:89] found id: ""
	I1216 07:38:14.917042 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.917050 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:14.917056 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:14.917114 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:14.961571 1798136 cri.go:89] found id: ""
	I1216 07:38:14.961603 1798136 logs.go:282] 0 containers: []
	W1216 07:38:14.961614 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:14.961626 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:14.961639 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:14.985009 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:14.985041 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:15.225399 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
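The refused connection to localhost:8443 simply means nothing is listening on the apiserver port yet, consistent with the empty crictl listings above. A generic way to tell "no listener" apart from "listening but unhealthy" (diagnostic sketch only; these commands are not part of the test run):

# Distinguish "no listener on 8443" from "listening but failing health checks".
if ! sudo ss -ltn 'sport = :8443' | grep -q 8443; then
  echo "nothing listening on 8443 (matches the connection refused above)"
else
  # /livez is the standard apiserver liveness endpoint.
  curl -sk https://localhost:8443/livez || echo "apiserver listening but unhealthy"
fi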
	I1216 07:38:15.225423 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:15.225437 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:15.266868 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:15.266901 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:15.301258 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:15.301285 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:17.872580 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:17.885312 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:17.885380 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:17.933747 1798136 cri.go:89] found id: ""
	I1216 07:38:17.933771 1798136 logs.go:282] 0 containers: []
	W1216 07:38:17.933779 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:17.933786 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:17.933844 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:17.976362 1798136 cri.go:89] found id: ""
	I1216 07:38:17.976388 1798136 logs.go:282] 0 containers: []
	W1216 07:38:17.976397 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:17.976403 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:17.976511 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:18.013805 1798136 cri.go:89] found id: ""
	I1216 07:38:18.013832 1798136 logs.go:282] 0 containers: []
	W1216 07:38:18.013841 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:18.013848 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:18.013913 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:18.061922 1798136 cri.go:89] found id: ""
	I1216 07:38:18.061945 1798136 logs.go:282] 0 containers: []
	W1216 07:38:18.061954 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:18.061961 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:18.062024 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:18.100679 1798136 cri.go:89] found id: ""
	I1216 07:38:18.100706 1798136 logs.go:282] 0 containers: []
	W1216 07:38:18.100715 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:18.100720 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:18.100780 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:18.143233 1798136 cri.go:89] found id: ""
	I1216 07:38:18.143259 1798136 logs.go:282] 0 containers: []
	W1216 07:38:18.143268 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:18.143274 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:18.143333 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:18.176145 1798136 cri.go:89] found id: ""
	I1216 07:38:18.176171 1798136 logs.go:282] 0 containers: []
	W1216 07:38:18.176179 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:18.176184 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:18.176243 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:18.220432 1798136 cri.go:89] found id: ""
	I1216 07:38:18.220459 1798136 logs.go:282] 0 containers: []
	W1216 07:38:18.220486 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:18.220496 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:18.220515 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:18.300198 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:18.300236 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:18.318768 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:18.318795 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:18.494421 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:18.494442 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:18.494455 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:18.529370 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:18.529408 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:21.083779 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:21.101274 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:21.101339 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:21.153976 1798136 cri.go:89] found id: ""
	I1216 07:38:21.153996 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.154004 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:21.154011 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:21.154072 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:21.183156 1798136 cri.go:89] found id: ""
	I1216 07:38:21.183175 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.183183 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:21.183189 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:21.183246 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:21.222016 1798136 cri.go:89] found id: ""
	I1216 07:38:21.222035 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.222053 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:21.222060 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:21.222124 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:21.251234 1798136 cri.go:89] found id: ""
	I1216 07:38:21.251253 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.251262 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:21.251268 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:21.251326 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:21.280873 1798136 cri.go:89] found id: ""
	I1216 07:38:21.280900 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.280909 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:21.280915 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:21.280973 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:21.309901 1798136 cri.go:89] found id: ""
	I1216 07:38:21.309920 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.309928 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:21.309934 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:21.309993 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:21.343410 1798136 cri.go:89] found id: ""
	I1216 07:38:21.343433 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.343440 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:21.343446 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:21.343512 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:21.379252 1798136 cri.go:89] found id: ""
	I1216 07:38:21.379274 1798136 logs.go:282] 0 containers: []
	W1216 07:38:21.379282 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:21.379291 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:21.379303 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:21.436348 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:21.436385 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:21.493717 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:21.493750 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:21.587920 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:21.587959 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:21.613593 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:21.613623 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:21.720722 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:24.221037 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:24.231210 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:24.231279 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:24.262610 1798136 cri.go:89] found id: ""
	I1216 07:38:24.262637 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.262646 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:24.262652 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:24.262711 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:24.289007 1798136 cri.go:89] found id: ""
	I1216 07:38:24.289032 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.289042 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:24.289049 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:24.289141 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:24.317421 1798136 cri.go:89] found id: ""
	I1216 07:38:24.317464 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.317474 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:24.317480 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:24.317543 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:24.344029 1798136 cri.go:89] found id: ""
	I1216 07:38:24.344058 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.344068 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:24.344075 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:24.344193 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:24.379894 1798136 cri.go:89] found id: ""
	I1216 07:38:24.379922 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.379932 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:24.379938 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:24.379998 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:24.423620 1798136 cri.go:89] found id: ""
	I1216 07:38:24.423646 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.423654 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:24.423660 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:24.423716 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:24.483722 1798136 cri.go:89] found id: ""
	I1216 07:38:24.483790 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.483815 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:24.483845 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:24.483941 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:24.549009 1798136 cri.go:89] found id: ""
	I1216 07:38:24.549088 1798136 logs.go:282] 0 containers: []
	W1216 07:38:24.549111 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:24.549153 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:24.549184 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:24.587029 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:24.587108 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:24.664130 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:24.664217 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:24.682957 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:24.682985 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:24.765803 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:24.765868 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:24.765947 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
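
The block above is one iteration of the apiserver wait loop visible throughout this log: look for a kube-apiserver process, ask crictl for each expected control-plane container, find none, gather kubelet/dmesg/describe-nodes/CRI-O logs, and retry about every three seconds (07:38:24, 07:38:27, 07:38:30, ...). The following is only a hypothetical Go sketch of that observed behavior, not minikube's actual implementation; runSSH, the 4-minute deadline, and the local execution (instead of minikube's ssh_runner over SSH) are assumptions made for illustration.

	// Hypothetical sketch of the polling loop seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runSSH stands in for minikube's ssh_runner; here it just runs the command locally.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // assumed timeout, not taken from the log
		for time.Now().Before(deadline) {
			// Step 1: is a kube-apiserver process running for this profile?
			if _, err := runSSH(`sudo pgrep -xnf 'kube-apiserver.*minikube.*'`); err == nil {
				fmt.Println("apiserver process found")
				return
			}
			// Step 2: ask the CRI runtime whether the control-plane containers exist at all.
			for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
				"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
				ids, _ := runSSH("sudo crictl ps -a --quiet --name=" + name)
				if ids == "" {
					fmt.Printf("no container matching %q\n", name)
				}
			}
			// Step 3: in the real log, kubelet/dmesg/describe-nodes/CRI-O logs are gathered here
			// for diagnostics; then the loop sleeps and retries.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
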
	I1216 07:38:27.301525 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:27.312014 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:27.312090 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:27.342380 1798136 cri.go:89] found id: ""
	I1216 07:38:27.342406 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.342416 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:27.342422 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:27.342483 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:27.371548 1798136 cri.go:89] found id: ""
	I1216 07:38:27.371582 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.371591 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:27.371597 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:27.371670 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:27.411861 1798136 cri.go:89] found id: ""
	I1216 07:38:27.411887 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.411896 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:27.411903 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:27.411963 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:27.449664 1798136 cri.go:89] found id: ""
	I1216 07:38:27.449687 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.449697 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:27.449703 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:27.449761 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:27.478414 1798136 cri.go:89] found id: ""
	I1216 07:38:27.478437 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.478446 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:27.478452 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:27.478510 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:27.505215 1798136 cri.go:89] found id: ""
	I1216 07:38:27.505243 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.505252 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:27.505259 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:27.505322 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:27.531792 1798136 cri.go:89] found id: ""
	I1216 07:38:27.531819 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.531828 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:27.531834 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:27.531895 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:27.558486 1798136 cri.go:89] found id: ""
	I1216 07:38:27.558518 1798136 logs.go:282] 0 containers: []
	W1216 07:38:27.558526 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:27.558535 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:27.558547 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:27.593719 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:27.593751 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:27.665589 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:27.665679 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:27.684064 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:27.684093 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:27.770727 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:27.770748 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:27.770774 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:30.316613 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:30.328057 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:30.328147 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:30.356037 1798136 cri.go:89] found id: ""
	I1216 07:38:30.356060 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.356068 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:30.356074 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:30.356132 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:30.403258 1798136 cri.go:89] found id: ""
	I1216 07:38:30.403285 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.403294 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:30.403300 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:30.403388 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:30.431697 1798136 cri.go:89] found id: ""
	I1216 07:38:30.431723 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.431732 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:30.431738 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:30.431794 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:30.466432 1798136 cri.go:89] found id: ""
	I1216 07:38:30.466454 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.466462 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:30.466468 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:30.466535 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:30.504090 1798136 cri.go:89] found id: ""
	I1216 07:38:30.504116 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.504126 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:30.504131 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:30.504205 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:30.573450 1798136 cri.go:89] found id: ""
	I1216 07:38:30.573472 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.573481 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:30.573487 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:30.573545 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:30.601356 1798136 cri.go:89] found id: ""
	I1216 07:38:30.601383 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.601404 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:30.601411 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:30.601469 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:30.633658 1798136 cri.go:89] found id: ""
	I1216 07:38:30.633685 1798136 logs.go:282] 0 containers: []
	W1216 07:38:30.633695 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:30.633704 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:30.633715 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:30.709362 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:30.709403 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:30.729394 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:30.729425 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:30.803444 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:30.803465 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:30.803479 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:30.839312 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:30.839440 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:33.380610 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:33.391444 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:33.391512 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:33.433682 1798136 cri.go:89] found id: ""
	I1216 07:38:33.433712 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.433722 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:33.433728 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:33.433784 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:33.462856 1798136 cri.go:89] found id: ""
	I1216 07:38:33.462884 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.462931 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:33.462942 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:33.463014 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:33.498885 1798136 cri.go:89] found id: ""
	I1216 07:38:33.498915 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.498924 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:33.498930 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:33.498997 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:33.529361 1798136 cri.go:89] found id: ""
	I1216 07:38:33.529400 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.529410 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:33.529417 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:33.529482 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:33.558559 1798136 cri.go:89] found id: ""
	I1216 07:38:33.558581 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.558589 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:33.558595 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:33.558656 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:33.586686 1798136 cri.go:89] found id: ""
	I1216 07:38:33.586709 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.586718 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:33.586724 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:33.586781 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:33.618816 1798136 cri.go:89] found id: ""
	I1216 07:38:33.618843 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.618852 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:33.618858 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:33.618916 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:33.657818 1798136 cri.go:89] found id: ""
	I1216 07:38:33.657841 1798136 logs.go:282] 0 containers: []
	W1216 07:38:33.657850 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:33.657860 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:33.657871 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:33.742795 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:33.742887 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:33.775576 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:33.775658 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:33.851904 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:33.851992 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:33.869473 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:33.869501 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:33.950576 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
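
Each retry also fails the "describe nodes" step with "connection refused" on localhost:8443, which simply means nothing is listening on the apiserver port yet. The snippet below is a tiny, hypothetical standalone check (not part of minikube or the test suite) that reproduces that symptom directly; the port and timeout are taken from the log and an assumed 2-second dial timeout.

	// Hypothetical helper: checks whether anything listens on the apiserver port
	// that kubectl is being refused on in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the log: no listener, so kubectl gets "connection refused".
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}
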
	I1216 07:38:36.451260 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:36.464942 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:36.465015 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:36.509064 1798136 cri.go:89] found id: ""
	I1216 07:38:36.509086 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.509095 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:36.509101 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:36.509164 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:36.551444 1798136 cri.go:89] found id: ""
	I1216 07:38:36.551470 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.551478 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:36.551485 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:36.551544 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:36.585717 1798136 cri.go:89] found id: ""
	I1216 07:38:36.585742 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.585751 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:36.585757 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:36.585814 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:36.612258 1798136 cri.go:89] found id: ""
	I1216 07:38:36.612283 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.612293 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:36.612299 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:36.612354 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:36.638697 1798136 cri.go:89] found id: ""
	I1216 07:38:36.638721 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.638729 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:36.638746 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:36.638806 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:36.701454 1798136 cri.go:89] found id: ""
	I1216 07:38:36.701478 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.701488 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:36.701494 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:36.701558 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:36.769185 1798136 cri.go:89] found id: ""
	I1216 07:38:36.769211 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.769220 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:36.769226 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:36.769286 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:36.802881 1798136 cri.go:89] found id: ""
	I1216 07:38:36.802907 1798136 logs.go:282] 0 containers: []
	W1216 07:38:36.802917 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:36.802925 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:36.802937 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:36.879319 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:36.879399 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:36.895899 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:36.895976 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:36.976381 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:36.976415 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:36.976428 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:37.007923 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:37.007966 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:39.543406 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:39.557617 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:39.557698 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:39.594926 1798136 cri.go:89] found id: ""
	I1216 07:38:39.594948 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.594956 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:39.594963 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:39.595020 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:39.622607 1798136 cri.go:89] found id: ""
	I1216 07:38:39.622630 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.622639 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:39.622645 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:39.622703 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:39.674730 1798136 cri.go:89] found id: ""
	I1216 07:38:39.674807 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.674819 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:39.674827 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:39.674925 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:39.716625 1798136 cri.go:89] found id: ""
	I1216 07:38:39.716701 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.716724 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:39.716747 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:39.716864 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:39.748788 1798136 cri.go:89] found id: ""
	I1216 07:38:39.748862 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.748884 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:39.748907 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:39.749020 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:39.790859 1798136 cri.go:89] found id: ""
	I1216 07:38:39.790930 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.790952 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:39.790973 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:39.791059 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:39.831348 1798136 cri.go:89] found id: ""
	I1216 07:38:39.831424 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.831448 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:39.831471 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:39.831584 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:39.876237 1798136 cri.go:89] found id: ""
	I1216 07:38:39.876319 1798136 logs.go:282] 0 containers: []
	W1216 07:38:39.876352 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:39.876399 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:39.876436 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:39.915236 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:39.915333 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:39.997185 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:39.997279 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:40.022740 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:40.022831 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:40.111427 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:40.111504 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:40.111536 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:42.646981 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:42.659248 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:42.659315 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:42.690745 1798136 cri.go:89] found id: ""
	I1216 07:38:42.690766 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.690775 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:42.690781 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:42.690841 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:42.721196 1798136 cri.go:89] found id: ""
	I1216 07:38:42.721222 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.721230 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:42.721237 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:42.721300 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:42.750566 1798136 cri.go:89] found id: ""
	I1216 07:38:42.750593 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.750602 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:42.750610 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:42.750667 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:42.775746 1798136 cri.go:89] found id: ""
	I1216 07:38:42.775772 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.775784 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:42.775790 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:42.775848 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:42.810677 1798136 cri.go:89] found id: ""
	I1216 07:38:42.810702 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.810732 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:42.810739 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:42.810806 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:42.840056 1798136 cri.go:89] found id: ""
	I1216 07:38:42.840082 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.840091 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:42.840097 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:42.840158 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:42.873129 1798136 cri.go:89] found id: ""
	I1216 07:38:42.873172 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.873181 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:42.873188 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:42.873255 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:42.915141 1798136 cri.go:89] found id: ""
	I1216 07:38:42.915175 1798136 logs.go:282] 0 containers: []
	W1216 07:38:42.915184 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:42.915194 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:42.915212 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:43.007667 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:43.007753 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:43.032567 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:43.032597 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:43.143730 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:43.143752 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:43.143765 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:43.188214 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:43.188315 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:45.740938 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:45.760096 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:45.760170 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:45.814217 1798136 cri.go:89] found id: ""
	I1216 07:38:45.814246 1798136 logs.go:282] 0 containers: []
	W1216 07:38:45.814256 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:45.814262 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:45.814328 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:45.868049 1798136 cri.go:89] found id: ""
	I1216 07:38:45.868071 1798136 logs.go:282] 0 containers: []
	W1216 07:38:45.868079 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:45.868085 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:45.868146 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:45.925786 1798136 cri.go:89] found id: ""
	I1216 07:38:45.925883 1798136 logs.go:282] 0 containers: []
	W1216 07:38:45.925907 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:45.925948 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:45.926052 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:45.982762 1798136 cri.go:89] found id: ""
	I1216 07:38:45.982786 1798136 logs.go:282] 0 containers: []
	W1216 07:38:45.982795 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:45.982801 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:45.982859 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:46.029996 1798136 cri.go:89] found id: ""
	I1216 07:38:46.030018 1798136 logs.go:282] 0 containers: []
	W1216 07:38:46.030026 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:46.030033 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:46.030107 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:46.075737 1798136 cri.go:89] found id: ""
	I1216 07:38:46.075759 1798136 logs.go:282] 0 containers: []
	W1216 07:38:46.075768 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:46.075774 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:46.075830 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:46.142736 1798136 cri.go:89] found id: ""
	I1216 07:38:46.142758 1798136 logs.go:282] 0 containers: []
	W1216 07:38:46.142767 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:46.142777 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:46.142836 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:46.181285 1798136 cri.go:89] found id: ""
	I1216 07:38:46.181361 1798136 logs.go:282] 0 containers: []
	W1216 07:38:46.181384 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:46.181407 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:46.181459 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:46.274595 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:46.274685 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:46.310810 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:46.310896 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:46.429878 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:46.430060 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:46.430092 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:46.472758 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:46.472840 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:49.032646 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:49.043567 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:49.043684 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:49.082615 1798136 cri.go:89] found id: ""
	I1216 07:38:49.082695 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.082718 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:49.082738 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:49.082848 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:49.112507 1798136 cri.go:89] found id: ""
	I1216 07:38:49.112580 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.112602 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:49.112624 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:49.112710 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:49.147777 1798136 cri.go:89] found id: ""
	I1216 07:38:49.147853 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.147880 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:49.147922 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:49.148019 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:49.197190 1798136 cri.go:89] found id: ""
	I1216 07:38:49.197265 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.197287 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:49.197312 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:49.197422 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:49.238880 1798136 cri.go:89] found id: ""
	I1216 07:38:49.238957 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.238995 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:49.239018 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:49.239115 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:49.268443 1798136 cri.go:89] found id: ""
	I1216 07:38:49.268561 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.268579 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:49.268587 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:49.268668 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:49.297847 1798136 cri.go:89] found id: ""
	I1216 07:38:49.297872 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.297881 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:49.297887 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:49.297948 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:49.327291 1798136 cri.go:89] found id: ""
	I1216 07:38:49.327366 1798136 logs.go:282] 0 containers: []
	W1216 07:38:49.327405 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:49.327433 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:49.327479 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:49.413274 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:49.413318 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:49.448004 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:49.448036 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:49.563589 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:49.563613 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:49.563626 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:49.599380 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:49.599418 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:52.158561 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:52.168988 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:52.169067 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:52.206448 1798136 cri.go:89] found id: ""
	I1216 07:38:52.206474 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.206483 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:52.206489 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:52.206547 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:52.238402 1798136 cri.go:89] found id: ""
	I1216 07:38:52.238424 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.238435 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:52.238441 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:52.238504 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:52.265191 1798136 cri.go:89] found id: ""
	I1216 07:38:52.265217 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.265227 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:52.265234 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:52.265297 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:52.293075 1798136 cri.go:89] found id: ""
	I1216 07:38:52.293151 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.293186 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:52.293211 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:52.293287 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:52.318473 1798136 cri.go:89] found id: ""
	I1216 07:38:52.318497 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.318507 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:52.318513 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:52.318572 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:52.349661 1798136 cri.go:89] found id: ""
	I1216 07:38:52.349689 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.349698 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:52.349704 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:52.349784 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:52.386261 1798136 cri.go:89] found id: ""
	I1216 07:38:52.386299 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.386308 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:52.386314 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:52.386375 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:52.438029 1798136 cri.go:89] found id: ""
	I1216 07:38:52.438053 1798136 logs.go:282] 0 containers: []
	W1216 07:38:52.438062 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:52.438072 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:52.438084 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:52.549234 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:52.549269 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:52.570898 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:52.570923 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:52.695274 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:52.695296 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:52.695308 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:52.759764 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:52.759805 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:55.308524 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:55.318789 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:55.318863 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:55.344094 1798136 cri.go:89] found id: ""
	I1216 07:38:55.344118 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.344127 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:55.344133 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:55.344194 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:55.370770 1798136 cri.go:89] found id: ""
	I1216 07:38:55.370794 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.370803 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:55.370809 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:55.370871 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:55.400359 1798136 cri.go:89] found id: ""
	I1216 07:38:55.400385 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.400394 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:55.400424 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:55.400511 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:55.427434 1798136 cri.go:89] found id: ""
	I1216 07:38:55.427457 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.427466 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:55.427472 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:55.427530 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:55.453078 1798136 cri.go:89] found id: ""
	I1216 07:38:55.453103 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.453113 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:55.453119 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:55.453182 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:55.479569 1798136 cri.go:89] found id: ""
	I1216 07:38:55.479595 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.479603 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:55.479610 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:55.479670 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:55.505593 1798136 cri.go:89] found id: ""
	I1216 07:38:55.505619 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.505628 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:55.505634 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:55.505691 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:55.535323 1798136 cri.go:89] found id: ""
	I1216 07:38:55.535350 1798136 logs.go:282] 0 containers: []
	W1216 07:38:55.535360 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:55.535369 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:55.535381 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:55.604962 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:55.605000 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:55.621571 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:55.621602 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:55.719133 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:55.719194 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:55.719222 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:55.756279 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:55.756317 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:38:58.288584 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:38:58.300613 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:38:58.300683 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:38:58.328598 1798136 cri.go:89] found id: ""
	I1216 07:38:58.328621 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.328630 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:38:58.328636 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:38:58.328699 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:38:58.361637 1798136 cri.go:89] found id: ""
	I1216 07:38:58.361659 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.361668 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:38:58.361674 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:38:58.361738 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:38:58.390925 1798136 cri.go:89] found id: ""
	I1216 07:38:58.390947 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.390956 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:38:58.390962 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:38:58.391023 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:38:58.425897 1798136 cri.go:89] found id: ""
	I1216 07:38:58.425978 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.426002 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:38:58.426025 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:38:58.426105 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:38:58.465379 1798136 cri.go:89] found id: ""
	I1216 07:38:58.465401 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.465410 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:38:58.465417 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:38:58.465472 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:38:58.499548 1798136 cri.go:89] found id: ""
	I1216 07:38:58.499577 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.499586 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:38:58.499593 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:38:58.499655 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:38:58.532623 1798136 cri.go:89] found id: ""
	I1216 07:38:58.532653 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.532663 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:38:58.532668 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:38:58.532727 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:38:58.561727 1798136 cri.go:89] found id: ""
	I1216 07:38:58.561758 1798136 logs.go:282] 0 containers: []
	W1216 07:38:58.561767 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:38:58.561779 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:38:58.561792 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:38:58.647650 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:38:58.647726 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:38:58.665066 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:38:58.665184 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:38:58.779751 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:38:58.779767 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:38:58.779779 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:38:58.814903 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:38:58.814940 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
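	The cycle above is one full pass of the same check applied to each control-plane component: minikube lists CRI containers by name and finds none, then falls back to gathering logs. For readers reproducing this by hand inside the node, here is a minimal bash sketch assembled from the crictl invocations that appear verbatim in this log; the component list is copied from the names queried above, while the loop itself and running it via `minikube ssh -p <profile>` are illustrative assumptions, not minikube tooling:
	# Hypothetical manual re-run of the per-component container check seen in this log.
	# Each call mirrors the logged "sudo crictl ps -a --quiet --name=<component>" commands.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no containers found matching \"$name\""   # corresponds to the logs.go:284 warnings above
	  else
	    echo "$name: $ids"
	  fi
	done
	An empty result for every name is what produces the repeated "0 containers" warnings throughout this section.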
	I1216 07:39:01.344083 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:01.357056 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:01.357128 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:01.394520 1798136 cri.go:89] found id: ""
	I1216 07:39:01.394543 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.394552 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:01.394557 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:01.394614 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:01.433290 1798136 cri.go:89] found id: ""
	I1216 07:39:01.433311 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.433320 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:01.433326 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:01.433386 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:01.470985 1798136 cri.go:89] found id: ""
	I1216 07:39:01.471006 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.471015 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:01.471020 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:01.471079 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:01.500084 1798136 cri.go:89] found id: ""
	I1216 07:39:01.500123 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.500132 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:01.500138 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:01.500211 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:01.537460 1798136 cri.go:89] found id: ""
	I1216 07:39:01.537490 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.537500 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:01.537507 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:01.537569 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:01.580802 1798136 cri.go:89] found id: ""
	I1216 07:39:01.580832 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.580841 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:01.580848 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:01.580912 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:01.625841 1798136 cri.go:89] found id: ""
	I1216 07:39:01.625868 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.625877 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:01.625884 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:01.625942 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:01.676041 1798136 cri.go:89] found id: ""
	I1216 07:39:01.676064 1798136 logs.go:282] 0 containers: []
	W1216 07:39:01.676072 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:01.676082 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:01.676094 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:01.788790 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:01.788832 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:01.819234 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:01.819268 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:01.997756 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:01.997773 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:01.997784 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:02.047544 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:02.047623 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:04.616845 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:04.628689 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:04.628755 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:04.692508 1798136 cri.go:89] found id: ""
	I1216 07:39:04.692530 1798136 logs.go:282] 0 containers: []
	W1216 07:39:04.692538 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:04.692544 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:04.692604 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:04.735821 1798136 cri.go:89] found id: ""
	I1216 07:39:04.735844 1798136 logs.go:282] 0 containers: []
	W1216 07:39:04.735852 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:04.735857 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:04.735914 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:04.790393 1798136 cri.go:89] found id: ""
	I1216 07:39:04.790415 1798136 logs.go:282] 0 containers: []
	W1216 07:39:04.790424 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:04.790431 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:04.790489 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:04.838921 1798136 cri.go:89] found id: ""
	I1216 07:39:04.838943 1798136 logs.go:282] 0 containers: []
	W1216 07:39:04.838951 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:04.838957 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:04.839017 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:04.900202 1798136 cri.go:89] found id: ""
	I1216 07:39:04.900225 1798136 logs.go:282] 0 containers: []
	W1216 07:39:04.900233 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:04.900240 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:04.900299 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:04.938369 1798136 cri.go:89] found id: ""
	I1216 07:39:04.938391 1798136 logs.go:282] 0 containers: []
	W1216 07:39:04.938399 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:04.938405 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:04.938462 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:04.990294 1798136 cri.go:89] found id: ""
	I1216 07:39:04.990372 1798136 logs.go:282] 0 containers: []
	W1216 07:39:04.990384 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:04.990393 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:04.990484 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:05.054229 1798136 cri.go:89] found id: ""
	I1216 07:39:05.054251 1798136 logs.go:282] 0 containers: []
	W1216 07:39:05.054260 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:05.054269 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:05.054282 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:05.153556 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:05.153673 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:05.188612 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:05.188688 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:05.343817 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:05.343884 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:05.343913 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:05.393340 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:05.393421 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:07.944589 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:07.963097 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:07.963170 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:07.991146 1798136 cri.go:89] found id: ""
	I1216 07:39:07.991170 1798136 logs.go:282] 0 containers: []
	W1216 07:39:07.991179 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:07.991186 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:07.991247 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:08.032778 1798136 cri.go:89] found id: ""
	I1216 07:39:08.032804 1798136 logs.go:282] 0 containers: []
	W1216 07:39:08.032814 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:08.032821 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:08.032887 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:08.066596 1798136 cri.go:89] found id: ""
	I1216 07:39:08.066622 1798136 logs.go:282] 0 containers: []
	W1216 07:39:08.066631 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:08.066637 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:08.066701 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:08.094378 1798136 cri.go:89] found id: ""
	I1216 07:39:08.094453 1798136 logs.go:282] 0 containers: []
	W1216 07:39:08.094476 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:08.094498 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:08.094587 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:08.124503 1798136 cri.go:89] found id: ""
	I1216 07:39:08.124529 1798136 logs.go:282] 0 containers: []
	W1216 07:39:08.124538 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:08.124544 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:08.124602 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:08.149624 1798136 cri.go:89] found id: ""
	I1216 07:39:08.149691 1798136 logs.go:282] 0 containers: []
	W1216 07:39:08.149714 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:08.149736 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:08.149815 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:08.175751 1798136 cri.go:89] found id: ""
	I1216 07:39:08.175818 1798136 logs.go:282] 0 containers: []
	W1216 07:39:08.175841 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:08.175886 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:08.175971 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:08.201395 1798136 cri.go:89] found id: ""
	I1216 07:39:08.201423 1798136 logs.go:282] 0 containers: []
	W1216 07:39:08.201432 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:08.201460 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:08.201476 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:08.270995 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:08.271062 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:08.271090 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:08.302409 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:08.302445 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:08.332871 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:08.332912 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:08.420751 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:08.420783 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:10.952284 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:10.962878 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:10.962948 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:10.990034 1798136 cri.go:89] found id: ""
	I1216 07:39:10.990062 1798136 logs.go:282] 0 containers: []
	W1216 07:39:10.990070 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:10.990077 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:10.990138 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:11.017518 1798136 cri.go:89] found id: ""
	I1216 07:39:11.017546 1798136 logs.go:282] 0 containers: []
	W1216 07:39:11.017556 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:11.017562 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:11.017626 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:11.044113 1798136 cri.go:89] found id: ""
	I1216 07:39:11.044140 1798136 logs.go:282] 0 containers: []
	W1216 07:39:11.044149 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:11.044157 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:11.044215 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:11.070579 1798136 cri.go:89] found id: ""
	I1216 07:39:11.070605 1798136 logs.go:282] 0 containers: []
	W1216 07:39:11.070614 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:11.070621 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:11.070685 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:11.097099 1798136 cri.go:89] found id: ""
	I1216 07:39:11.097128 1798136 logs.go:282] 0 containers: []
	W1216 07:39:11.097138 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:11.097144 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:11.097207 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:11.124551 1798136 cri.go:89] found id: ""
	I1216 07:39:11.124581 1798136 logs.go:282] 0 containers: []
	W1216 07:39:11.124592 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:11.124599 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:11.124661 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:11.153700 1798136 cri.go:89] found id: ""
	I1216 07:39:11.153786 1798136 logs.go:282] 0 containers: []
	W1216 07:39:11.153802 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:11.153809 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:11.153875 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:11.182521 1798136 cri.go:89] found id: ""
	I1216 07:39:11.182592 1798136 logs.go:282] 0 containers: []
	W1216 07:39:11.182616 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:11.182641 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:11.182675 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:11.215173 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:11.215203 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:11.284160 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:11.284199 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:11.302371 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:11.302404 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:11.371313 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:11.371387 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:11.371407 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:13.903305 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:13.913008 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:13.913081 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:13.942140 1798136 cri.go:89] found id: ""
	I1216 07:39:13.942166 1798136 logs.go:282] 0 containers: []
	W1216 07:39:13.942175 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:13.942182 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:13.942241 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:13.968053 1798136 cri.go:89] found id: ""
	I1216 07:39:13.968080 1798136 logs.go:282] 0 containers: []
	W1216 07:39:13.968089 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:13.968095 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:13.968159 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:13.993876 1798136 cri.go:89] found id: ""
	I1216 07:39:13.993903 1798136 logs.go:282] 0 containers: []
	W1216 07:39:13.993913 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:13.993922 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:13.993982 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:14.021647 1798136 cri.go:89] found id: ""
	I1216 07:39:14.021681 1798136 logs.go:282] 0 containers: []
	W1216 07:39:14.021691 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:14.021698 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:14.021762 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:14.048565 1798136 cri.go:89] found id: ""
	I1216 07:39:14.048589 1798136 logs.go:282] 0 containers: []
	W1216 07:39:14.048598 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:14.048604 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:14.048662 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:14.074940 1798136 cri.go:89] found id: ""
	I1216 07:39:14.075013 1798136 logs.go:282] 0 containers: []
	W1216 07:39:14.075034 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:14.075055 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:14.075147 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:14.102259 1798136 cri.go:89] found id: ""
	I1216 07:39:14.102282 1798136 logs.go:282] 0 containers: []
	W1216 07:39:14.102291 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:14.102314 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:14.102374 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:14.131599 1798136 cri.go:89] found id: ""
	I1216 07:39:14.131637 1798136 logs.go:282] 0 containers: []
	W1216 07:39:14.131647 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:14.131656 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:14.131667 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:14.203085 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:14.203125 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:14.220665 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:14.220695 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:14.287884 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:14.287907 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:14.287921 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:14.318074 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:14.318108 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:16.846063 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:16.856265 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:16.856334 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:16.883792 1798136 cri.go:89] found id: ""
	I1216 07:39:16.883815 1798136 logs.go:282] 0 containers: []
	W1216 07:39:16.883823 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:16.883829 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:16.883889 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:16.912256 1798136 cri.go:89] found id: ""
	I1216 07:39:16.912282 1798136 logs.go:282] 0 containers: []
	W1216 07:39:16.912291 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:16.912297 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:16.912358 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:16.942259 1798136 cri.go:89] found id: ""
	I1216 07:39:16.942282 1798136 logs.go:282] 0 containers: []
	W1216 07:39:16.942291 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:16.942297 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:16.942360 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:16.967698 1798136 cri.go:89] found id: ""
	I1216 07:39:16.967719 1798136 logs.go:282] 0 containers: []
	W1216 07:39:16.967727 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:16.967734 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:16.967795 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:16.994519 1798136 cri.go:89] found id: ""
	I1216 07:39:16.994542 1798136 logs.go:282] 0 containers: []
	W1216 07:39:16.994552 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:16.994559 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:16.994620 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:17.024943 1798136 cri.go:89] found id: ""
	I1216 07:39:17.025016 1798136 logs.go:282] 0 containers: []
	W1216 07:39:17.025042 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:17.025057 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:17.025124 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:17.051454 1798136 cri.go:89] found id: ""
	I1216 07:39:17.051482 1798136 logs.go:282] 0 containers: []
	W1216 07:39:17.051491 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:17.051497 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:17.051554 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:17.080878 1798136 cri.go:89] found id: ""
	I1216 07:39:17.080908 1798136 logs.go:282] 0 containers: []
	W1216 07:39:17.080918 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:17.080928 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:17.080940 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:17.149178 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:17.149217 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:17.165889 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:17.165919 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:17.233393 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:17.233418 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:17.233431 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:17.264365 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:17.264400 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:19.794157 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:19.804658 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:19.804742 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:19.833913 1798136 cri.go:89] found id: ""
	I1216 07:39:19.833938 1798136 logs.go:282] 0 containers: []
	W1216 07:39:19.833947 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:19.833954 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:19.834016 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:19.860003 1798136 cri.go:89] found id: ""
	I1216 07:39:19.860030 1798136 logs.go:282] 0 containers: []
	W1216 07:39:19.860040 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:19.860046 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:19.860103 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:19.890099 1798136 cri.go:89] found id: ""
	I1216 07:39:19.890127 1798136 logs.go:282] 0 containers: []
	W1216 07:39:19.890135 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:19.890142 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:19.890200 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:19.917841 1798136 cri.go:89] found id: ""
	I1216 07:39:19.917866 1798136 logs.go:282] 0 containers: []
	W1216 07:39:19.917874 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:19.917886 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:19.917947 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:19.948486 1798136 cri.go:89] found id: ""
	I1216 07:39:19.948565 1798136 logs.go:282] 0 containers: []
	W1216 07:39:19.948589 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:19.948609 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:19.948700 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:19.980614 1798136 cri.go:89] found id: ""
	I1216 07:39:19.980638 1798136 logs.go:282] 0 containers: []
	W1216 07:39:19.980647 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:19.980653 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:19.980712 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:20.020269 1798136 cri.go:89] found id: ""
	I1216 07:39:20.020296 1798136 logs.go:282] 0 containers: []
	W1216 07:39:20.020306 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:20.020312 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:20.020379 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:20.048743 1798136 cri.go:89] found id: ""
	I1216 07:39:20.048773 1798136 logs.go:282] 0 containers: []
	W1216 07:39:20.048782 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:20.048791 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:20.048823 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:20.116916 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:20.116953 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:20.134178 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:20.134264 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:20.207513 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:20.207538 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:20.207556 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:20.239233 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:20.239268 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:22.773645 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:22.783778 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:22.783844 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:22.813539 1798136 cri.go:89] found id: ""
	I1216 07:39:22.813565 1798136 logs.go:282] 0 containers: []
	W1216 07:39:22.813574 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:22.813581 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:22.813641 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:22.839233 1798136 cri.go:89] found id: ""
	I1216 07:39:22.839258 1798136 logs.go:282] 0 containers: []
	W1216 07:39:22.839267 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:22.839273 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:22.839343 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:22.869660 1798136 cri.go:89] found id: ""
	I1216 07:39:22.869687 1798136 logs.go:282] 0 containers: []
	W1216 07:39:22.869696 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:22.869702 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:22.869761 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:22.894482 1798136 cri.go:89] found id: ""
	I1216 07:39:22.894505 1798136 logs.go:282] 0 containers: []
	W1216 07:39:22.894514 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:22.894520 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:22.894580 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:22.921275 1798136 cri.go:89] found id: ""
	I1216 07:39:22.921302 1798136 logs.go:282] 0 containers: []
	W1216 07:39:22.921311 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:22.921318 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:22.921378 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:22.946742 1798136 cri.go:89] found id: ""
	I1216 07:39:22.946766 1798136 logs.go:282] 0 containers: []
	W1216 07:39:22.946775 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:22.946781 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:22.946841 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:22.972531 1798136 cri.go:89] found id: ""
	I1216 07:39:22.972611 1798136 logs.go:282] 0 containers: []
	W1216 07:39:22.972637 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:22.972651 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:22.972732 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:23.000963 1798136 cri.go:89] found id: ""
	I1216 07:39:23.000989 1798136 logs.go:282] 0 containers: []
	W1216 07:39:23.000999 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:23.001009 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:23.001023 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:23.070826 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:23.070848 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:23.070861 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:23.101292 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:23.101330 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:23.138105 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:23.138134 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:23.210735 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:23.210777 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:25.728609 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:25.740071 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:25.740155 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:25.774843 1798136 cri.go:89] found id: ""
	I1216 07:39:25.774885 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.774894 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:25.774901 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:25.774974 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:25.803439 1798136 cri.go:89] found id: ""
	I1216 07:39:25.803477 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.803486 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:25.803491 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:25.803558 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:25.839190 1798136 cri.go:89] found id: ""
	I1216 07:39:25.839232 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.839242 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:25.839247 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:25.839320 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:25.879676 1798136 cri.go:89] found id: ""
	I1216 07:39:25.879704 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.879714 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:25.879720 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:25.879791 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:25.906699 1798136 cri.go:89] found id: ""
	I1216 07:39:25.906738 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.906748 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:25.906761 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:25.906837 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:25.937870 1798136 cri.go:89] found id: ""
	I1216 07:39:25.937938 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.937972 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:25.937992 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:25.938086 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:25.970484 1798136 cri.go:89] found id: ""
	I1216 07:39:25.970558 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.970582 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:25.970603 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:25.970701 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:25.999817 1798136 cri.go:89] found id: ""
	I1216 07:39:25.999884 1798136 logs.go:282] 0 containers: []
	W1216 07:39:25.999917 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:25.999940 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:25.999979 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:26.087344 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:26.087417 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:26.087445 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:26.121504 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:26.121537 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:26.168338 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:26.168407 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:26.242556 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:26.242596 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
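	Each retry ends by collecting the same evidence: the kubelet and CRI-O journals, filtered dmesg output, a `kubectl describe nodes` run against the in-node kubeconfig, and a container listing. The describe-nodes step is the one that keeps failing with "connection to the server localhost:8443 was refused", which is consistent with no kube-apiserver container being found above. A hedged sketch of the same collection, copied from the commands in this log; the output file names are illustrative assumptions and not part of minikube:
	# Hypothetical manual collection of the logs minikube gathers on each retry.
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u crio -n 400 > crio.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	# Fails with "connection refused" on localhost:8443 while the apiserver is down:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig > nodes.txt
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a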
	I1216 07:39:28.760650 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:28.770908 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:28.770987 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:28.797266 1798136 cri.go:89] found id: ""
	I1216 07:39:28.797290 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.797299 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:28.797306 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:28.797375 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:28.823043 1798136 cri.go:89] found id: ""
	I1216 07:39:28.823069 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.823079 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:28.823085 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:28.823148 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:28.850791 1798136 cri.go:89] found id: ""
	I1216 07:39:28.850818 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.850828 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:28.850834 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:28.850903 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:28.876889 1798136 cri.go:89] found id: ""
	I1216 07:39:28.876972 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.876989 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:28.876997 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:28.877075 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:28.904038 1798136 cri.go:89] found id: ""
	I1216 07:39:28.904066 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.904074 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:28.904080 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:28.904141 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:28.929288 1798136 cri.go:89] found id: ""
	I1216 07:39:28.929310 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.929318 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:28.929324 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:28.929389 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:28.953974 1798136 cri.go:89] found id: ""
	I1216 07:39:28.953998 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.954007 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:28.954013 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:28.954073 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:28.983639 1798136 cri.go:89] found id: ""
	I1216 07:39:28.983665 1798136 logs.go:282] 0 containers: []
	W1216 07:39:28.983674 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:28.983683 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:28.983694 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:29.014848 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:29.014885 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:29.045274 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:29.045309 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:29.120310 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:29.120346 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:29.137025 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:29.137056 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:29.199568 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:31.701037 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:31.711625 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:31.711704 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:31.739485 1798136 cri.go:89] found id: ""
	I1216 07:39:31.739510 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.739520 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:31.739526 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:31.739586 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:31.769081 1798136 cri.go:89] found id: ""
	I1216 07:39:31.769105 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.769114 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:31.769120 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:31.769182 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:31.794968 1798136 cri.go:89] found id: ""
	I1216 07:39:31.794992 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.795020 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:31.795027 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:31.795096 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:31.821453 1798136 cri.go:89] found id: ""
	I1216 07:39:31.821479 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.821488 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:31.821494 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:31.821562 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:31.846835 1798136 cri.go:89] found id: ""
	I1216 07:39:31.846915 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.846937 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:31.846959 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:31.847074 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:31.876661 1798136 cri.go:89] found id: ""
	I1216 07:39:31.876742 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.876779 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:31.876803 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:31.876895 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:31.907914 1798136 cri.go:89] found id: ""
	I1216 07:39:31.907989 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.908012 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:31.908033 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:31.908152 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:31.934826 1798136 cri.go:89] found id: ""
	I1216 07:39:31.934894 1798136 logs.go:282] 0 containers: []
	W1216 07:39:31.934917 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:31.934940 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:31.934979 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:32.003218 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:32.003268 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:32.023141 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:32.023174 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:32.092137 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:32.092216 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:32.092246 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:32.123526 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:32.123564 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:34.653791 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:34.665416 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:34.665490 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:34.694808 1798136 cri.go:89] found id: ""
	I1216 07:39:34.694834 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.694843 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:34.694850 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:34.694914 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:34.729306 1798136 cri.go:89] found id: ""
	I1216 07:39:34.729331 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.729341 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:34.729348 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:34.729404 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:34.755850 1798136 cri.go:89] found id: ""
	I1216 07:39:34.755877 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.755886 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:34.755893 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:34.755957 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:34.781203 1798136 cri.go:89] found id: ""
	I1216 07:39:34.781228 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.781237 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:34.781243 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:34.781301 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:34.811574 1798136 cri.go:89] found id: ""
	I1216 07:39:34.811598 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.811607 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:34.811614 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:34.811678 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:34.842312 1798136 cri.go:89] found id: ""
	I1216 07:39:34.842337 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.842346 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:34.842353 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:34.842441 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:34.868081 1798136 cri.go:89] found id: ""
	I1216 07:39:34.868107 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.868117 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:34.868123 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:34.868217 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:34.893443 1798136 cri.go:89] found id: ""
	I1216 07:39:34.893469 1798136 logs.go:282] 0 containers: []
	W1216 07:39:34.893477 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:34.893485 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:34.893498 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:34.960940 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:34.960959 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:34.960978 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:34.992412 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:34.992455 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:35.026051 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:35.026080 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:35.099357 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:35.099404 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:37.617312 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:37.627264 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:37.627336 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:37.660537 1798136 cri.go:89] found id: ""
	I1216 07:39:37.660565 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.660574 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:37.660580 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:37.660642 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:37.689044 1798136 cri.go:89] found id: ""
	I1216 07:39:37.689070 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.689079 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:37.689085 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:37.689148 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:37.721182 1798136 cri.go:89] found id: ""
	I1216 07:39:37.721210 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.721219 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:37.721225 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:37.721291 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:37.748635 1798136 cri.go:89] found id: ""
	I1216 07:39:37.748658 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.748667 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:37.748673 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:37.748736 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:37.774631 1798136 cri.go:89] found id: ""
	I1216 07:39:37.774661 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.774672 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:37.774679 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:37.774738 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:37.801157 1798136 cri.go:89] found id: ""
	I1216 07:39:37.801185 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.801194 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:37.801201 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:37.801262 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:37.825873 1798136 cri.go:89] found id: ""
	I1216 07:39:37.825897 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.825906 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:37.825912 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:37.825969 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:37.851033 1798136 cri.go:89] found id: ""
	I1216 07:39:37.851056 1798136 logs.go:282] 0 containers: []
	W1216 07:39:37.851065 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:37.851073 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:37.851085 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:37.920097 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:37.920137 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:37.936948 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:37.936984 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:38.007765 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:38.007795 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:38.007833 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:38.040013 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:38.040050 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:40.573500 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:40.584616 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:40.584686 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:40.610870 1798136 cri.go:89] found id: ""
	I1216 07:39:40.610896 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.610905 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:40.610912 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:40.610971 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:40.639548 1798136 cri.go:89] found id: ""
	I1216 07:39:40.639574 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.639584 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:40.639590 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:40.639655 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:40.666483 1798136 cri.go:89] found id: ""
	I1216 07:39:40.666511 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.666520 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:40.666527 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:40.666588 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:40.696732 1798136 cri.go:89] found id: ""
	I1216 07:39:40.696759 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.696768 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:40.696774 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:40.696839 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:40.730376 1798136 cri.go:89] found id: ""
	I1216 07:39:40.730401 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.730410 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:40.730416 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:40.730494 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:40.756051 1798136 cri.go:89] found id: ""
	I1216 07:39:40.756076 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.756085 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:40.756091 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:40.756178 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:40.782907 1798136 cri.go:89] found id: ""
	I1216 07:39:40.782935 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.782944 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:40.782951 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:40.783011 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:40.810756 1798136 cri.go:89] found id: ""
	I1216 07:39:40.810784 1798136 logs.go:282] 0 containers: []
	W1216 07:39:40.810794 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:40.810804 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:40.810816 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:40.881177 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:40.881212 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:40.898264 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:40.898442 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:40.970184 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:40.970246 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:40.970267 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:41.001303 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:41.001349 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:43.536377 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:43.547936 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:43.548011 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:43.577785 1798136 cri.go:89] found id: ""
	I1216 07:39:43.577810 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.577818 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:43.577827 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:43.577889 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:43.607824 1798136 cri.go:89] found id: ""
	I1216 07:39:43.607851 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.607861 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:43.607870 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:43.607929 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:43.633797 1798136 cri.go:89] found id: ""
	I1216 07:39:43.633824 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.633834 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:43.633839 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:43.633899 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:43.664211 1798136 cri.go:89] found id: ""
	I1216 07:39:43.664233 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.664242 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:43.664248 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:43.664308 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:43.699080 1798136 cri.go:89] found id: ""
	I1216 07:39:43.699105 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.699114 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:43.699120 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:43.699177 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:43.727626 1798136 cri.go:89] found id: ""
	I1216 07:39:43.727652 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.727661 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:43.727667 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:43.727726 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:43.752198 1798136 cri.go:89] found id: ""
	I1216 07:39:43.752269 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.752292 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:43.752313 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:43.752425 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:43.777703 1798136 cri.go:89] found id: ""
	I1216 07:39:43.777776 1798136 logs.go:282] 0 containers: []
	W1216 07:39:43.777799 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:43.777822 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:43.777860 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:43.844091 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:43.844130 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:43.860668 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:43.860696 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:43.930431 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:43.930456 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:43.930471 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:43.962434 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:43.962468 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:46.495414 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:46.509243 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:46.509323 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:46.543528 1798136 cri.go:89] found id: ""
	I1216 07:39:46.543574 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.543588 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:46.543600 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:46.543659 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:46.578894 1798136 cri.go:89] found id: ""
	I1216 07:39:46.578916 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.578924 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:46.578931 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:46.578992 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:46.614690 1798136 cri.go:89] found id: ""
	I1216 07:39:46.614711 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.614719 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:46.614725 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:46.614788 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:46.659148 1798136 cri.go:89] found id: ""
	I1216 07:39:46.659224 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.659249 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:46.659269 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:46.659376 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:46.732118 1798136 cri.go:89] found id: ""
	I1216 07:39:46.732191 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.732215 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:46.732237 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:46.732347 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:46.763272 1798136 cri.go:89] found id: ""
	I1216 07:39:46.763343 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.763367 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:46.763388 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:46.763478 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:46.798302 1798136 cri.go:89] found id: ""
	I1216 07:39:46.798373 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.798396 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:46.798417 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:46.798503 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:46.828710 1798136 cri.go:89] found id: ""
	I1216 07:39:46.828772 1798136 logs.go:282] 0 containers: []
	W1216 07:39:46.828815 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:46.828843 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:46.828868 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:46.864609 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:46.864641 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:46.909314 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:46.909401 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:46.990950 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:46.991034 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:47.008012 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:47.008113 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:47.102895 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:49.604688 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:49.615049 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:49.615121 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:49.640456 1798136 cri.go:89] found id: ""
	I1216 07:39:49.640524 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.640534 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:49.640540 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:49.640598 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:49.671216 1798136 cri.go:89] found id: ""
	I1216 07:39:49.671240 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.671248 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:49.671254 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:49.671313 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:49.704080 1798136 cri.go:89] found id: ""
	I1216 07:39:49.704103 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.704112 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:49.704118 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:49.704177 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:49.734703 1798136 cri.go:89] found id: ""
	I1216 07:39:49.734729 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.734739 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:49.734745 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:49.734806 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:49.760279 1798136 cri.go:89] found id: ""
	I1216 07:39:49.760309 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.760324 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:49.760330 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:49.760389 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:49.786583 1798136 cri.go:89] found id: ""
	I1216 07:39:49.786609 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.786618 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:49.786624 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:49.786691 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:49.812645 1798136 cri.go:89] found id: ""
	I1216 07:39:49.812718 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.812742 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:49.812763 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:49.812839 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:49.840344 1798136 cri.go:89] found id: ""
	I1216 07:39:49.840415 1798136 logs.go:282] 0 containers: []
	W1216 07:39:49.840439 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:49.840463 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:49.840602 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:49.909254 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:49.909291 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:49.925777 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:49.925807 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:49.990554 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:49.990576 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:49.990591 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:50.026553 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:50.026596 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:52.560340 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:52.570528 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:52.570602 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:52.596408 1798136 cri.go:89] found id: ""
	I1216 07:39:52.596430 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.596439 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:52.596444 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:52.596564 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:52.621873 1798136 cri.go:89] found id: ""
	I1216 07:39:52.621897 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.621906 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:52.621913 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:52.621969 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:52.649075 1798136 cri.go:89] found id: ""
	I1216 07:39:52.649102 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.649111 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:52.649118 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:52.649175 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:52.678910 1798136 cri.go:89] found id: ""
	I1216 07:39:52.678937 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.678946 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:52.678952 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:52.679011 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:52.723071 1798136 cri.go:89] found id: ""
	I1216 07:39:52.723096 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.723104 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:52.723110 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:52.723170 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:52.749017 1798136 cri.go:89] found id: ""
	I1216 07:39:52.749052 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.749062 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:52.749083 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:52.749146 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:52.773617 1798136 cri.go:89] found id: ""
	I1216 07:39:52.773640 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.773649 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:52.773655 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:52.773713 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:52.800646 1798136 cri.go:89] found id: ""
	I1216 07:39:52.800719 1798136 logs.go:282] 0 containers: []
	W1216 07:39:52.800743 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:52.800766 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:52.800801 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:52.829150 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:52.829228 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:52.897515 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:52.897555 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:52.920995 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:52.921027 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:52.986665 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:52.986687 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:52.986701 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:55.520677 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:55.533216 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:55.533307 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:55.565246 1798136 cri.go:89] found id: ""
	I1216 07:39:55.565271 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.565280 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:55.565286 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:55.565348 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:55.591482 1798136 cri.go:89] found id: ""
	I1216 07:39:55.591507 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.591515 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:55.591522 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:55.591588 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:55.616891 1798136 cri.go:89] found id: ""
	I1216 07:39:55.616915 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.616924 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:55.616931 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:55.616989 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:55.641894 1798136 cri.go:89] found id: ""
	I1216 07:39:55.641920 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.641930 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:55.641937 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:55.641996 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:55.674149 1798136 cri.go:89] found id: ""
	I1216 07:39:55.674173 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.674182 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:55.674187 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:55.674249 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:55.703786 1798136 cri.go:89] found id: ""
	I1216 07:39:55.703810 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.703818 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:55.703825 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:55.703884 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:55.732971 1798136 cri.go:89] found id: ""
	I1216 07:39:55.732995 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.733004 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:55.733010 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:55.733073 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:55.762501 1798136 cri.go:89] found id: ""
	I1216 07:39:55.762525 1798136 logs.go:282] 0 containers: []
	W1216 07:39:55.762533 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:55.762542 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:55.762554 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:55.830732 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:55.830769 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:55.847292 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:55.847331 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:55.913240 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:55.913258 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:55.913271 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:55.944293 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:55.944329 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:39:58.473584 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:39:58.485583 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:39:58.485662 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:39:58.514308 1798136 cri.go:89] found id: ""
	I1216 07:39:58.514332 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.514341 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:39:58.514348 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:39:58.514407 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:39:58.544842 1798136 cri.go:89] found id: ""
	I1216 07:39:58.544868 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.544878 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:39:58.544884 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:39:58.544947 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:39:58.570461 1798136 cri.go:89] found id: ""
	I1216 07:39:58.570487 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.570496 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:39:58.570502 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:39:58.570560 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:39:58.596185 1798136 cri.go:89] found id: ""
	I1216 07:39:58.596210 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.596219 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:39:58.596225 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:39:58.596287 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:39:58.622103 1798136 cri.go:89] found id: ""
	I1216 07:39:58.622129 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.622138 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:39:58.622145 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:39:58.622228 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:39:58.655526 1798136 cri.go:89] found id: ""
	I1216 07:39:58.655552 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.655561 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:39:58.655568 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:39:58.655651 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:39:58.686707 1798136 cri.go:89] found id: ""
	I1216 07:39:58.686734 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.686743 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:39:58.686749 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:39:58.686832 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:39:58.714088 1798136 cri.go:89] found id: ""
	I1216 07:39:58.714114 1798136 logs.go:282] 0 containers: []
	W1216 07:39:58.714124 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:39:58.714151 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:39:58.714168 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:39:58.785285 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:39:58.785322 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:39:58.801743 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:39:58.801774 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:39:58.869629 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:39:58.869650 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:39:58.869663 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:39:58.901729 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:39:58.901810 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:01.441453 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:01.451988 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:01.452067 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:01.479453 1798136 cri.go:89] found id: ""
	I1216 07:40:01.479477 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.479485 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:01.479492 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:01.479556 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:01.504912 1798136 cri.go:89] found id: ""
	I1216 07:40:01.504939 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.504948 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:01.504954 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:01.505016 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:01.532246 1798136 cri.go:89] found id: ""
	I1216 07:40:01.532294 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.532304 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:01.532312 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:01.532378 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:01.560132 1798136 cri.go:89] found id: ""
	I1216 07:40:01.560157 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.560166 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:01.560173 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:01.560235 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:01.587867 1798136 cri.go:89] found id: ""
	I1216 07:40:01.587894 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.587904 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:01.587911 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:01.587971 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:01.614102 1798136 cri.go:89] found id: ""
	I1216 07:40:01.614129 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.614138 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:01.614145 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:01.614225 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:01.649286 1798136 cri.go:89] found id: ""
	I1216 07:40:01.649364 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.649381 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:01.649388 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:01.649465 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:01.674954 1798136 cri.go:89] found id: ""
	I1216 07:40:01.674981 1798136 logs.go:282] 0 containers: []
	W1216 07:40:01.674990 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:01.675000 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:01.675011 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:01.706889 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:01.706920 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:01.776693 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:01.776731 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:01.793718 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:01.793751 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:01.857553 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:01.857580 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:01.857593 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:04.389901 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:04.399959 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:04.400080 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:04.427558 1798136 cri.go:89] found id: ""
	I1216 07:40:04.427585 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.427594 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:04.427600 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:04.427657 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:04.452240 1798136 cri.go:89] found id: ""
	I1216 07:40:04.452265 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.452286 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:04.452293 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:04.452353 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:04.478373 1798136 cri.go:89] found id: ""
	I1216 07:40:04.478398 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.478407 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:04.478413 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:04.478471 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:04.507304 1798136 cri.go:89] found id: ""
	I1216 07:40:04.507332 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.507341 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:04.507348 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:04.507414 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:04.533319 1798136 cri.go:89] found id: ""
	I1216 07:40:04.533346 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.533356 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:04.533362 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:04.533423 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:04.559572 1798136 cri.go:89] found id: ""
	I1216 07:40:04.559597 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.559606 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:04.559612 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:04.559673 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:04.585926 1798136 cri.go:89] found id: ""
	I1216 07:40:04.585950 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.585959 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:04.585965 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:04.586024 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:04.617257 1798136 cri.go:89] found id: ""
	I1216 07:40:04.617280 1798136 logs.go:282] 0 containers: []
	W1216 07:40:04.617289 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:04.617297 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:04.617308 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:04.685813 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:04.685848 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:04.702215 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:04.702297 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:04.767860 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:04.767883 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:04.767896 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:04.798477 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:04.798515 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:07.326907 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:07.337069 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:07.337141 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:07.366118 1798136 cri.go:89] found id: ""
	I1216 07:40:07.366150 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.366159 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:07.366166 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:07.366235 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:07.391942 1798136 cri.go:89] found id: ""
	I1216 07:40:07.391969 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.391978 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:07.391984 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:07.392051 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:07.420928 1798136 cri.go:89] found id: ""
	I1216 07:40:07.420998 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.421021 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:07.421041 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:07.421135 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:07.450639 1798136 cri.go:89] found id: ""
	I1216 07:40:07.450668 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.450677 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:07.450684 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:07.450744 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:07.479944 1798136 cri.go:89] found id: ""
	I1216 07:40:07.479967 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.479976 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:07.479982 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:07.480039 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:07.506900 1798136 cri.go:89] found id: ""
	I1216 07:40:07.506935 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.506943 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:07.506949 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:07.507021 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:07.533012 1798136 cri.go:89] found id: ""
	I1216 07:40:07.533040 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.533050 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:07.533056 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:07.533114 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:07.559433 1798136 cri.go:89] found id: ""
	I1216 07:40:07.559458 1798136 logs.go:282] 0 containers: []
	W1216 07:40:07.559467 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:07.559476 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:07.559488 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:07.628421 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:07.628464 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:07.644942 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:07.644976 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:07.709490 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:07.709512 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:07.709525 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:07.742415 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:07.742453 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:10.276642 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:10.287124 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:10.287232 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:10.316941 1798136 cri.go:89] found id: ""
	I1216 07:40:10.316965 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.316974 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:10.316980 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:10.317080 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:10.343203 1798136 cri.go:89] found id: ""
	I1216 07:40:10.343233 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.343243 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:10.343249 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:10.343308 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:10.370355 1798136 cri.go:89] found id: ""
	I1216 07:40:10.370425 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.370449 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:10.370463 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:10.370536 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:10.399771 1798136 cri.go:89] found id: ""
	I1216 07:40:10.399800 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.399809 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:10.399825 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:10.399887 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:10.429206 1798136 cri.go:89] found id: ""
	I1216 07:40:10.429276 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.429294 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:10.429301 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:10.429361 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:10.454191 1798136 cri.go:89] found id: ""
	I1216 07:40:10.454216 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.454226 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:10.454232 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:10.454294 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:10.480205 1798136 cri.go:89] found id: ""
	I1216 07:40:10.480230 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.480240 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:10.480248 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:10.480311 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:10.508064 1798136 cri.go:89] found id: ""
	I1216 07:40:10.508090 1798136 logs.go:282] 0 containers: []
	W1216 07:40:10.508098 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:10.508107 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:10.508119 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:10.576621 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:10.576659 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:10.593446 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:10.593477 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:10.667115 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:10.667186 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:10.667211 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:10.700649 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:10.700688 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:13.240210 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:13.250590 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:13.250667 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:13.280247 1798136 cri.go:89] found id: ""
	I1216 07:40:13.280271 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.280279 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:13.280285 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:13.280345 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:13.305580 1798136 cri.go:89] found id: ""
	I1216 07:40:13.305607 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.305617 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:13.305623 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:13.305682 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:13.336085 1798136 cri.go:89] found id: ""
	I1216 07:40:13.336110 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.336120 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:13.336127 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:13.336190 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:13.362359 1798136 cri.go:89] found id: ""
	I1216 07:40:13.362383 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.362393 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:13.362399 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:13.362462 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:13.391914 1798136 cri.go:89] found id: ""
	I1216 07:40:13.391942 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.391951 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:13.391957 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:13.392018 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:13.418477 1798136 cri.go:89] found id: ""
	I1216 07:40:13.418504 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.418525 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:13.418548 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:13.418622 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:13.449845 1798136 cri.go:89] found id: ""
	I1216 07:40:13.449869 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.449879 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:13.449885 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:13.449948 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:13.475124 1798136 cri.go:89] found id: ""
	I1216 07:40:13.475148 1798136 logs.go:282] 0 containers: []
	W1216 07:40:13.475158 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:13.475168 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:13.475180 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:13.542876 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:13.542917 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:13.559588 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:13.559618 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:13.624237 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:13.624258 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:13.624270 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:13.655429 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:13.655466 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:16.187653 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:16.203194 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:16.203276 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:16.229731 1798136 cri.go:89] found id: ""
	I1216 07:40:16.229755 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.229764 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:16.229770 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:16.229831 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:16.254799 1798136 cri.go:89] found id: ""
	I1216 07:40:16.254821 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.254830 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:16.254836 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:16.254891 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:16.281188 1798136 cri.go:89] found id: ""
	I1216 07:40:16.281213 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.281229 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:16.281236 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:16.281296 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:16.306366 1798136 cri.go:89] found id: ""
	I1216 07:40:16.306451 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.306474 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:16.306495 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:16.306579 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:16.337286 1798136 cri.go:89] found id: ""
	I1216 07:40:16.337320 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.337335 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:16.337342 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:16.337418 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:16.370070 1798136 cri.go:89] found id: ""
	I1216 07:40:16.370094 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.370103 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:16.370109 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:16.370167 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:16.399387 1798136 cri.go:89] found id: ""
	I1216 07:40:16.399424 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.399434 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:16.399441 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:16.399516 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:16.427378 1798136 cri.go:89] found id: ""
	I1216 07:40:16.427405 1798136 logs.go:282] 0 containers: []
	W1216 07:40:16.427414 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:16.427423 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:16.427434 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:16.501457 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:16.501497 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:16.517980 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:16.518008 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:16.584181 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:16.584258 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:16.584286 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:16.614845 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:16.614880 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:19.148607 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:19.162566 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:19.162640 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:19.191286 1798136 cri.go:89] found id: ""
	I1216 07:40:19.191327 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.191336 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:19.191343 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:19.191413 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:19.221820 1798136 cri.go:89] found id: ""
	I1216 07:40:19.221842 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.221851 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:19.221857 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:19.221915 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:19.246905 1798136 cri.go:89] found id: ""
	I1216 07:40:19.246930 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.246939 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:19.246946 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:19.247006 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:19.271785 1798136 cri.go:89] found id: ""
	I1216 07:40:19.271812 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.271836 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:19.271842 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:19.271918 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:19.297917 1798136 cri.go:89] found id: ""
	I1216 07:40:19.297942 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.297951 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:19.297957 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:19.298015 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:19.324930 1798136 cri.go:89] found id: ""
	I1216 07:40:19.325003 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.325029 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:19.325045 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:19.325119 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:19.350941 1798136 cri.go:89] found id: ""
	I1216 07:40:19.350981 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.350991 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:19.350997 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:19.351072 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:19.380909 1798136 cri.go:89] found id: ""
	I1216 07:40:19.380947 1798136 logs.go:282] 0 containers: []
	W1216 07:40:19.380956 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:19.380966 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:19.380977 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:19.449788 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:19.449811 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:19.449828 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:19.480449 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:19.480590 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:19.513984 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:19.514014 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:19.587910 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:19.587949 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:22.104605 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:22.115329 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:22.115410 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:22.142838 1798136 cri.go:89] found id: ""
	I1216 07:40:22.142862 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.142871 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:22.142877 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:22.142936 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:22.181878 1798136 cri.go:89] found id: ""
	I1216 07:40:22.181957 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.181980 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:22.182001 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:22.182119 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:22.217369 1798136 cri.go:89] found id: ""
	I1216 07:40:22.217392 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.217400 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:22.217406 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:22.217466 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:22.243977 1798136 cri.go:89] found id: ""
	I1216 07:40:22.244005 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.244013 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:22.244019 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:22.244080 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:22.269395 1798136 cri.go:89] found id: ""
	I1216 07:40:22.269419 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.269427 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:22.269433 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:22.269498 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:22.298913 1798136 cri.go:89] found id: ""
	I1216 07:40:22.298988 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.299011 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:22.299033 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:22.299126 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:22.324855 1798136 cri.go:89] found id: ""
	I1216 07:40:22.324877 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.324885 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:22.324891 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:22.324957 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:22.351245 1798136 cri.go:89] found id: ""
	I1216 07:40:22.351318 1798136 logs.go:282] 0 containers: []
	W1216 07:40:22.351341 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:22.351365 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:22.351401 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:22.368170 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:22.368200 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:22.432037 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:22.432057 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:22.432097 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:22.467843 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:22.467919 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:22.499618 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:22.499646 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:25.068652 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:25.080360 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:25.080438 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:25.110126 1798136 cri.go:89] found id: ""
	I1216 07:40:25.110160 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.110170 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:25.110177 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:25.110241 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:25.138638 1798136 cri.go:89] found id: ""
	I1216 07:40:25.138666 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.138676 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:25.138685 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:25.138748 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:25.167924 1798136 cri.go:89] found id: ""
	I1216 07:40:25.167991 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.168016 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:25.168036 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:25.168124 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:25.200163 1798136 cri.go:89] found id: ""
	I1216 07:40:25.200195 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.200205 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:25.200212 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:25.200304 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:25.229127 1798136 cri.go:89] found id: ""
	I1216 07:40:25.229151 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.229160 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:25.229166 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:25.229226 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:25.254474 1798136 cri.go:89] found id: ""
	I1216 07:40:25.254498 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.254507 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:25.254512 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:25.254574 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:25.284068 1798136 cri.go:89] found id: ""
	I1216 07:40:25.284095 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.284104 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:25.284111 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:25.284172 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:25.312937 1798136 cri.go:89] found id: ""
	I1216 07:40:25.312962 1798136 logs.go:282] 0 containers: []
	W1216 07:40:25.312971 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:25.312979 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:25.312991 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:25.386078 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:25.386119 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:25.402438 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:25.402605 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:25.468622 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:25.468643 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:25.468657 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:25.500287 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:25.500321 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:28.030313 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:28.040391 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:28.040460 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:28.067120 1798136 cri.go:89] found id: ""
	I1216 07:40:28.067143 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.067153 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:28.067159 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:28.067226 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:28.093058 1798136 cri.go:89] found id: ""
	I1216 07:40:28.093082 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.093091 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:28.093097 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:28.093166 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:28.119207 1798136 cri.go:89] found id: ""
	I1216 07:40:28.119232 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.119241 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:28.119247 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:28.119313 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:28.145614 1798136 cri.go:89] found id: ""
	I1216 07:40:28.145636 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.145645 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:28.145658 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:28.145716 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:28.184800 1798136 cri.go:89] found id: ""
	I1216 07:40:28.184824 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.184832 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:28.184839 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:28.184899 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:28.218781 1798136 cri.go:89] found id: ""
	I1216 07:40:28.218804 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.218814 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:28.218821 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:28.218881 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:28.246255 1798136 cri.go:89] found id: ""
	I1216 07:40:28.246277 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.246285 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:28.246291 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:28.246352 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:28.271817 1798136 cri.go:89] found id: ""
	I1216 07:40:28.271839 1798136 logs.go:282] 0 containers: []
	W1216 07:40:28.271848 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:28.271857 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:28.271869 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:28.307190 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:28.307220 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:28.374308 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:28.374349 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:28.390982 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:28.391012 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:28.461890 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:28.461914 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:28.461927 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:30.992826 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:31.004315 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:31.004403 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:31.035579 1798136 cri.go:89] found id: ""
	I1216 07:40:31.035608 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.035617 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:31.035623 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:31.035682 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:31.061312 1798136 cri.go:89] found id: ""
	I1216 07:40:31.061342 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.061351 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:31.061358 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:31.061419 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:31.089729 1798136 cri.go:89] found id: ""
	I1216 07:40:31.089753 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.089761 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:31.089783 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:31.089844 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:31.119084 1798136 cri.go:89] found id: ""
	I1216 07:40:31.119108 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.119118 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:31.119124 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:31.119196 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:31.168607 1798136 cri.go:89] found id: ""
	I1216 07:40:31.168662 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.168672 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:31.168685 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:31.168789 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:31.213083 1798136 cri.go:89] found id: ""
	I1216 07:40:31.213115 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.213124 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:31.213135 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:31.213207 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:31.242674 1798136 cri.go:89] found id: ""
	I1216 07:40:31.242703 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.242712 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:31.242719 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:31.242777 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:31.268211 1798136 cri.go:89] found id: ""
	I1216 07:40:31.268239 1798136 logs.go:282] 0 containers: []
	W1216 07:40:31.268248 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:31.268256 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:31.268268 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:31.302059 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:31.302092 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:31.372785 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:31.372825 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:31.389253 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:31.389290 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:31.467836 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:31.467905 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:31.467933 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:33.999602 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:34.013902 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:34.013981 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:34.054728 1798136 cri.go:89] found id: ""
	I1216 07:40:34.054762 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.054773 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:34.054779 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:34.054840 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:34.090651 1798136 cri.go:89] found id: ""
	I1216 07:40:34.090675 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.090686 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:34.090692 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:34.090756 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:34.127919 1798136 cri.go:89] found id: ""
	I1216 07:40:34.127946 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.127955 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:34.127961 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:34.128022 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:34.227898 1798136 cri.go:89] found id: ""
	I1216 07:40:34.227926 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.227935 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:34.227941 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:34.228006 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:34.285470 1798136 cri.go:89] found id: ""
	I1216 07:40:34.285499 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.285508 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:34.285514 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:34.285584 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:34.320888 1798136 cri.go:89] found id: ""
	I1216 07:40:34.320916 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.320925 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:34.320934 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:34.321000 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:34.359963 1798136 cri.go:89] found id: ""
	I1216 07:40:34.359989 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.359998 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:34.360005 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:34.360061 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:34.395762 1798136 cri.go:89] found id: ""
	I1216 07:40:34.395789 1798136 logs.go:282] 0 containers: []
	W1216 07:40:34.395798 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:34.395807 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:34.395819 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:34.477431 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:34.477470 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:34.494927 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:34.495014 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:34.564086 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:34.564105 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:34.564118 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:34.594578 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:34.594617 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:37.124625 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:37.137140 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:37.137212 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:37.216703 1798136 cri.go:89] found id: ""
	I1216 07:40:37.216731 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.216740 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:37.216746 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:37.216803 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:37.288180 1798136 cri.go:89] found id: ""
	I1216 07:40:37.288214 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.288232 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:37.288238 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:37.288298 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:37.326949 1798136 cri.go:89] found id: ""
	I1216 07:40:37.326978 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.326986 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:37.326992 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:37.327060 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:37.357913 1798136 cri.go:89] found id: ""
	I1216 07:40:37.357940 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.357949 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:37.357956 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:37.358023 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:37.390493 1798136 cri.go:89] found id: ""
	I1216 07:40:37.390519 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.390531 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:37.390537 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:37.390611 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:37.427860 1798136 cri.go:89] found id: ""
	I1216 07:40:37.427894 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.427904 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:37.427911 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:37.427982 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:37.474952 1798136 cri.go:89] found id: ""
	I1216 07:40:37.474978 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.475005 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:37.475015 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:37.475091 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:37.523097 1798136 cri.go:89] found id: ""
	I1216 07:40:37.523120 1798136 logs.go:282] 0 containers: []
	W1216 07:40:37.523128 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:37.523137 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:37.523150 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:37.559422 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:37.559453 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:37.594162 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:37.594189 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:37.667349 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:37.667390 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:37.687968 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:37.688003 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:37.777837 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:40.278980 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:40.289204 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:40.289280 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:40.315713 1798136 cri.go:89] found id: ""
	I1216 07:40:40.315738 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.315748 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:40.315754 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:40.315816 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:40.340143 1798136 cri.go:89] found id: ""
	I1216 07:40:40.340166 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.340174 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:40.340181 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:40.340239 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:40.365260 1798136 cri.go:89] found id: ""
	I1216 07:40:40.365284 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.365292 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:40.365298 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:40.365358 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:40.391481 1798136 cri.go:89] found id: ""
	I1216 07:40:40.391505 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.391513 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:40.391523 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:40.391667 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:40.417186 1798136 cri.go:89] found id: ""
	I1216 07:40:40.417210 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.417219 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:40.417226 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:40.417284 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:40.442996 1798136 cri.go:89] found id: ""
	I1216 07:40:40.443026 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.443034 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:40.443040 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:40.443099 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:40.468063 1798136 cri.go:89] found id: ""
	I1216 07:40:40.468086 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.468095 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:40.468101 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:40.468159 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:40.498284 1798136 cri.go:89] found id: ""
	I1216 07:40:40.498311 1798136 logs.go:282] 0 containers: []
	W1216 07:40:40.498320 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:40.498329 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:40.498342 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:40.570673 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:40.570717 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:40.593598 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:40.593632 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:40.688327 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:40.688362 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:40.688375 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:40.727453 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:40.727582 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:43.278400 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:43.289009 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:43.289083 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:43.320846 1798136 cri.go:89] found id: ""
	I1216 07:40:43.320878 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.320887 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:43.320894 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:43.320955 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:43.345748 1798136 cri.go:89] found id: ""
	I1216 07:40:43.345776 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.345786 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:43.345792 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:43.345852 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:43.372171 1798136 cri.go:89] found id: ""
	I1216 07:40:43.372194 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.372203 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:43.372209 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:43.372292 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:43.397864 1798136 cri.go:89] found id: ""
	I1216 07:40:43.397888 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.397896 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:43.397903 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:43.397974 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:43.423870 1798136 cri.go:89] found id: ""
	I1216 07:40:43.423899 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.423914 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:43.423921 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:43.423982 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:43.450794 1798136 cri.go:89] found id: ""
	I1216 07:40:43.450817 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.450826 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:43.450832 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:43.450890 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:43.481510 1798136 cri.go:89] found id: ""
	I1216 07:40:43.481538 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.481548 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:43.481554 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:43.481624 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:43.511567 1798136 cri.go:89] found id: ""
	I1216 07:40:43.511590 1798136 logs.go:282] 0 containers: []
	W1216 07:40:43.511598 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:43.511608 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:43.511620 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:43.578849 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:43.578887 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:43.596587 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:43.596680 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:43.671235 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:43.671302 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:43.671330 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:43.702877 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:43.702915 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:46.236160 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:46.246430 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:46.246517 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:46.275563 1798136 cri.go:89] found id: ""
	I1216 07:40:46.275589 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.275598 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:46.275604 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:46.275662 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:46.305665 1798136 cri.go:89] found id: ""
	I1216 07:40:46.305691 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.305700 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:46.305707 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:46.305766 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:46.332679 1798136 cri.go:89] found id: ""
	I1216 07:40:46.332708 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.332718 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:46.332724 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:46.332785 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:46.358034 1798136 cri.go:89] found id: ""
	I1216 07:40:46.358060 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.358070 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:46.358078 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:46.358155 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:46.387265 1798136 cri.go:89] found id: ""
	I1216 07:40:46.387292 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.387303 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:46.387309 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:46.387380 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:46.412727 1798136 cri.go:89] found id: ""
	I1216 07:40:46.412752 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.412766 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:46.412793 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:46.412858 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:46.439739 1798136 cri.go:89] found id: ""
	I1216 07:40:46.439765 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.439773 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:46.439780 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:46.439837 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:46.466075 1798136 cri.go:89] found id: ""
	I1216 07:40:46.466100 1798136 logs.go:282] 0 containers: []
	W1216 07:40:46.466110 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:46.466119 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:46.466131 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:46.536725 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:46.536769 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:46.553726 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:46.553759 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:46.624195 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:46.624214 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:46.624232 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:46.660327 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:46.660373 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:49.192604 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:49.208349 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:49.208421 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:49.235992 1798136 cri.go:89] found id: ""
	I1216 07:40:49.236017 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.236025 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:49.236031 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:49.236094 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:49.262181 1798136 cri.go:89] found id: ""
	I1216 07:40:49.262206 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.262215 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:49.262221 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:49.262279 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:49.290214 1798136 cri.go:89] found id: ""
	I1216 07:40:49.290239 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.290247 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:49.290253 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:49.290312 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:49.316207 1798136 cri.go:89] found id: ""
	I1216 07:40:49.316232 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.316240 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:49.316247 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:49.316306 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:49.345549 1798136 cri.go:89] found id: ""
	I1216 07:40:49.345575 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.345584 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:49.345590 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:49.345646 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:49.371035 1798136 cri.go:89] found id: ""
	I1216 07:40:49.371111 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.371133 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:49.371153 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:49.371242 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:49.397173 1798136 cri.go:89] found id: ""
	I1216 07:40:49.397205 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.397215 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:49.397222 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:49.397285 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:49.426511 1798136 cri.go:89] found id: ""
	I1216 07:40:49.426536 1798136 logs.go:282] 0 containers: []
	W1216 07:40:49.426545 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:49.426573 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:49.426595 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:49.454517 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:49.454546 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:49.521066 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:49.521112 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:49.537883 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:49.537920 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:49.605112 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:49.605137 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:49.605150 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:52.136696 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:52.147958 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:52.148029 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:52.181309 1798136 cri.go:89] found id: ""
	I1216 07:40:52.181336 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.181345 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:52.181351 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:52.181420 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:52.217625 1798136 cri.go:89] found id: ""
	I1216 07:40:52.217648 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.217657 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:52.217663 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:52.217720 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:52.244250 1798136 cri.go:89] found id: ""
	I1216 07:40:52.244274 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.244282 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:52.244288 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:52.244345 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:52.270799 1798136 cri.go:89] found id: ""
	I1216 07:40:52.270823 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.270832 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:52.270838 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:52.270900 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:52.301432 1798136 cri.go:89] found id: ""
	I1216 07:40:52.301457 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.301467 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:52.301473 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:52.301537 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:52.332029 1798136 cri.go:89] found id: ""
	I1216 07:40:52.332095 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.332118 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:52.332139 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:52.332213 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:52.358023 1798136 cri.go:89] found id: ""
	I1216 07:40:52.358045 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.358055 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:52.358061 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:52.358118 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:52.383750 1798136 cri.go:89] found id: ""
	I1216 07:40:52.383776 1798136 logs.go:282] 0 containers: []
	W1216 07:40:52.383786 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:52.383795 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:52.383807 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:52.411464 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:52.411494 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:52.479951 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:52.479989 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:52.496463 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:52.496525 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:52.560571 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:52.560605 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:52.560618 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:55.093534 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:55.107869 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:55.107961 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:55.165390 1798136 cri.go:89] found id: ""
	I1216 07:40:55.165420 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.165430 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:55.165436 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:55.165500 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:55.213900 1798136 cri.go:89] found id: ""
	I1216 07:40:55.213924 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.213932 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:55.213938 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:55.213996 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:55.247503 1798136 cri.go:89] found id: ""
	I1216 07:40:55.247543 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.247556 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:55.247568 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:55.247635 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:55.283491 1798136 cri.go:89] found id: ""
	I1216 07:40:55.283520 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.283529 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:55.283541 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:55.283618 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:55.309983 1798136 cri.go:89] found id: ""
	I1216 07:40:55.310004 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.310013 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:55.310019 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:55.310076 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:55.335483 1798136 cri.go:89] found id: ""
	I1216 07:40:55.335505 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.335513 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:55.335520 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:55.335577 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:55.361503 1798136 cri.go:89] found id: ""
	I1216 07:40:55.361582 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.361616 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:55.361642 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:55.361727 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:55.387460 1798136 cri.go:89] found id: ""
	I1216 07:40:55.387499 1798136 logs.go:282] 0 containers: []
	W1216 07:40:55.387509 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:55.387536 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:55.387557 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:40:55.419062 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:55.419090 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:55.492436 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:55.492482 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:55.509133 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:55.509164 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:55.577440 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:55.577472 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:55.577485 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:58.109282 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:40:58.120320 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:40:58.120398 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:40:58.174024 1798136 cri.go:89] found id: ""
	I1216 07:40:58.174053 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.174063 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:40:58.174070 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:40:58.174126 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:40:58.238466 1798136 cri.go:89] found id: ""
	I1216 07:40:58.238495 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.238504 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:40:58.238510 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:40:58.238570 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:40:58.326001 1798136 cri.go:89] found id: ""
	I1216 07:40:58.326056 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.326072 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:40:58.326079 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:40:58.326148 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:40:58.358289 1798136 cri.go:89] found id: ""
	I1216 07:40:58.358316 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.358325 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:40:58.358331 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:40:58.358395 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:40:58.387592 1798136 cri.go:89] found id: ""
	I1216 07:40:58.387646 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.387656 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:40:58.387662 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:40:58.387729 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:40:58.418190 1798136 cri.go:89] found id: ""
	I1216 07:40:58.418220 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.418229 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:40:58.418236 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:40:58.418296 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:40:58.474810 1798136 cri.go:89] found id: ""
	I1216 07:40:58.474839 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.474848 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:40:58.474854 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:40:58.474918 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:40:58.522039 1798136 cri.go:89] found id: ""
	I1216 07:40:58.522068 1798136 logs.go:282] 0 containers: []
	W1216 07:40:58.522077 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:40:58.522085 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:40:58.522097 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:40:58.614688 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:40:58.614796 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:40:58.640056 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:40:58.640085 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:40:58.750193 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:40:58.750211 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:40:58.750235 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:40:58.789206 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:40:58.789242 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:41:01.338822 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:41:01.349121 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:41:01.349197 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:41:01.374695 1798136 cri.go:89] found id: ""
	I1216 07:41:01.374725 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.374734 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:41:01.374740 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:41:01.374798 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:41:01.400319 1798136 cri.go:89] found id: ""
	I1216 07:41:01.400344 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.400352 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:41:01.400359 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:41:01.400442 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:41:01.426289 1798136 cri.go:89] found id: ""
	I1216 07:41:01.426316 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.426325 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:41:01.426331 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:41:01.426390 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:41:01.455886 1798136 cri.go:89] found id: ""
	I1216 07:41:01.455913 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.455922 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:41:01.455928 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:41:01.455985 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:41:01.482448 1798136 cri.go:89] found id: ""
	I1216 07:41:01.482473 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.482483 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:41:01.482489 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:41:01.482584 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:41:01.514900 1798136 cri.go:89] found id: ""
	I1216 07:41:01.514931 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.514941 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:41:01.514947 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:41:01.515006 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:41:01.545995 1798136 cri.go:89] found id: ""
	I1216 07:41:01.546021 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.546030 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:41:01.546037 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:41:01.546094 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:41:01.573900 1798136 cri.go:89] found id: ""
	I1216 07:41:01.573924 1798136 logs.go:282] 0 containers: []
	W1216 07:41:01.573933 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:41:01.573941 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:41:01.573954 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:41:01.590966 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:41:01.591049 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:41:01.658256 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:41:01.658280 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:41:01.658306 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:41:01.688778 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:41:01.688814 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:41:01.720823 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:41:01.720843 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:41:04.287354 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:41:04.298022 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:41:04.298131 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:41:04.327869 1798136 cri.go:89] found id: ""
	I1216 07:41:04.327938 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.327960 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:41:04.327967 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:41:04.328029 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:41:04.354886 1798136 cri.go:89] found id: ""
	I1216 07:41:04.354964 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.354980 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:41:04.354987 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:41:04.355053 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:41:04.385586 1798136 cri.go:89] found id: ""
	I1216 07:41:04.385693 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.385719 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:41:04.385727 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:41:04.385800 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:41:04.413413 1798136 cri.go:89] found id: ""
	I1216 07:41:04.413449 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.413458 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:41:04.413465 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:41:04.413546 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:41:04.440305 1798136 cri.go:89] found id: ""
	I1216 07:41:04.440341 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.440351 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:41:04.440358 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:41:04.440427 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:41:04.466398 1798136 cri.go:89] found id: ""
	I1216 07:41:04.466428 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.466438 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:41:04.466446 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:41:04.466505 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:41:04.494402 1798136 cri.go:89] found id: ""
	I1216 07:41:04.494427 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.494436 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:41:04.494442 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:41:04.494507 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:41:04.520243 1798136 cri.go:89] found id: ""
	I1216 07:41:04.520266 1798136 logs.go:282] 0 containers: []
	W1216 07:41:04.520275 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:41:04.520284 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:41:04.520295 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:41:04.590225 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:41:04.590263 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:41:04.607678 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:41:04.607713 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:41:04.676985 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:41:04.677018 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:41:04.677032 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:41:04.709236 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:41:04.709274 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:41:07.238872 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:41:07.250094 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:41:07.250158 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:41:07.281020 1798136 cri.go:89] found id: ""
	I1216 07:41:07.281041 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.281050 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:41:07.281056 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:41:07.281113 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:41:07.315335 1798136 cri.go:89] found id: ""
	I1216 07:41:07.315358 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.315366 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:41:07.315371 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:41:07.315427 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:41:07.348082 1798136 cri.go:89] found id: ""
	I1216 07:41:07.348105 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.348114 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:41:07.348120 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:41:07.348183 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:41:07.380325 1798136 cri.go:89] found id: ""
	I1216 07:41:07.380347 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.380356 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:41:07.380362 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:41:07.380417 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:41:07.409469 1798136 cri.go:89] found id: ""
	I1216 07:41:07.409491 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.409500 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:41:07.409505 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:41:07.409564 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:41:07.453991 1798136 cri.go:89] found id: ""
	I1216 07:41:07.454012 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.454020 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:41:07.454028 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:41:07.454083 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:41:07.482682 1798136 cri.go:89] found id: ""
	I1216 07:41:07.482703 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.482711 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:41:07.482717 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:41:07.482773 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:41:07.515124 1798136 cri.go:89] found id: ""
	I1216 07:41:07.515150 1798136 logs.go:282] 0 containers: []
	W1216 07:41:07.515159 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:41:07.515167 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:41:07.515179 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:41:07.590882 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:41:07.590968 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:41:07.607502 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:41:07.607529 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:41:07.673464 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:41:07.673527 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:41:07.673555 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:41:07.704655 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:41:07.704690 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:41:10.237685 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:41:10.248170 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:41:10.248241 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:41:10.279461 1798136 cri.go:89] found id: ""
	I1216 07:41:10.279484 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.279493 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:41:10.279499 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:41:10.279556 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:41:10.305459 1798136 cri.go:89] found id: ""
	I1216 07:41:10.305481 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.305490 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:41:10.305495 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:41:10.305551 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:41:10.331213 1798136 cri.go:89] found id: ""
	I1216 07:41:10.331277 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.331300 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:41:10.331321 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:41:10.331411 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:41:10.361871 1798136 cri.go:89] found id: ""
	I1216 07:41:10.361893 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.361902 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:41:10.361907 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:41:10.361966 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:41:10.387081 1798136 cri.go:89] found id: ""
	I1216 07:41:10.387109 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.387131 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:41:10.387137 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:41:10.387199 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:41:10.412956 1798136 cri.go:89] found id: ""
	I1216 07:41:10.412978 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.412986 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:41:10.412993 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:41:10.413050 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:41:10.438316 1798136 cri.go:89] found id: ""
	I1216 07:41:10.438340 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.438348 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:41:10.438354 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:41:10.438409 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:41:10.464061 1798136 cri.go:89] found id: ""
	I1216 07:41:10.464086 1798136 logs.go:282] 0 containers: []
	W1216 07:41:10.464094 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:41:10.464103 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:41:10.464114 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:41:10.530097 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:41:10.530134 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:41:10.547750 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:41:10.547781 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:41:10.619676 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:41:10.619718 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:41:10.619732 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:41:10.651175 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:41:10.651213 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:41:13.184169 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:41:13.195721 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:41:13.195791 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:41:13.224738 1798136 cri.go:89] found id: ""
	I1216 07:41:13.224761 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.224770 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:41:13.224776 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:41:13.224834 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:41:13.255654 1798136 cri.go:89] found id: ""
	I1216 07:41:13.255734 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.255757 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:41:13.255791 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:41:13.255870 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:41:13.281065 1798136 cri.go:89] found id: ""
	I1216 07:41:13.281093 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.281113 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:41:13.281121 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:41:13.281184 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:41:13.307363 1798136 cri.go:89] found id: ""
	I1216 07:41:13.307389 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.307398 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:41:13.307403 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:41:13.307461 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:41:13.337342 1798136 cri.go:89] found id: ""
	I1216 07:41:13.337369 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.337378 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:41:13.337384 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:41:13.337455 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:41:13.367477 1798136 cri.go:89] found id: ""
	I1216 07:41:13.367499 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.367509 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:41:13.367516 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:41:13.367573 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:41:13.395269 1798136 cri.go:89] found id: ""
	I1216 07:41:13.395295 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.395304 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:41:13.395310 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:41:13.395374 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:41:13.420508 1798136 cri.go:89] found id: ""
	I1216 07:41:13.420537 1798136 logs.go:282] 0 containers: []
	W1216 07:41:13.420545 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:41:13.420554 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:41:13.420565 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:41:13.489667 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:41:13.489690 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:41:13.489703 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:41:13.521258 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:41:13.521293 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 07:41:13.550461 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:41:13.550541 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:41:13.623226 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:41:13.623267 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:41:16.140798 1798136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:41:16.151183 1798136 kubeadm.go:602] duration metric: took 4m3.389138185s to restartPrimaryControlPlane
	W1216 07:41:16.151283 1798136 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 07:41:16.151375 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 07:41:16.576427 1798136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:41:16.589578 1798136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 07:41:16.599789 1798136 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 07:41:16.599857 1798136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 07:41:16.607918 1798136 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 07:41:16.607940 1798136 kubeadm.go:158] found existing configuration files:
	
	I1216 07:41:16.608003 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 07:41:16.616887 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 07:41:16.616971 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 07:41:16.624337 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 07:41:16.632218 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 07:41:16.632348 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 07:41:16.640156 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 07:41:16.648237 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 07:41:16.648328 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 07:41:16.655886 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 07:41:16.663654 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 07:41:16.663742 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
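Note on the stale-config cleanup above: minikube greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when that endpoint is not found (here every file is already missing, so each grep exits 2 and each rm is a no-op). A minimal shell sketch of that same per-file check, using the endpoint string from the log; this is an illustration, not minikube's actual code:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep -q exits non-zero when the endpoint is absent (or the file does not exist),
      # in which case the stale file is removed before "kubeadm init" runs
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done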
	I1216 07:41:16.671840 1798136 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 07:41:16.710279 1798136 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 07:41:16.710344 1798136 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 07:41:16.781809 1798136 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 07:41:16.781883 1798136 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 07:41:16.781975 1798136 kubeadm.go:319] OS: Linux
	I1216 07:41:16.782077 1798136 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 07:41:16.782156 1798136 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 07:41:16.782231 1798136 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 07:41:16.782310 1798136 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 07:41:16.782394 1798136 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 07:41:16.782499 1798136 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 07:41:16.782588 1798136 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 07:41:16.782671 1798136 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 07:41:16.782750 1798136 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 07:41:16.852153 1798136 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 07:41:16.852349 1798136 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 07:41:16.852535 1798136 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 07:41:16.863671 1798136 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 07:41:16.867919 1798136 out.go:252]   - Generating certificates and keys ...
	I1216 07:41:16.868016 1798136 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 07:41:16.868084 1798136 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 07:41:16.868174 1798136 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 07:41:16.868246 1798136 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 07:41:16.868321 1798136 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 07:41:16.868879 1798136 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 07:41:16.869715 1798136 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 07:41:16.870558 1798136 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 07:41:16.870977 1798136 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 07:41:16.871873 1798136 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 07:41:16.872736 1798136 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 07:41:16.872826 1798136 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 07:41:17.196256 1798136 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 07:41:17.498844 1798136 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 07:41:17.768228 1798136 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 07:41:17.963709 1798136 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 07:41:18.152996 1798136 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 07:41:18.153748 1798136 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 07:41:18.156330 1798136 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 07:41:18.159843 1798136 out.go:252]   - Booting up control plane ...
	I1216 07:41:18.160016 1798136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 07:41:18.160148 1798136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 07:41:18.160273 1798136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 07:41:18.176015 1798136 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 07:41:18.176141 1798136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 07:41:18.183982 1798136 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 07:41:18.184543 1798136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 07:41:18.184617 1798136 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 07:41:18.318845 1798136 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 07:41:18.318973 1798136 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 07:45:18.320076 1798136 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001193351s
	I1216 07:45:18.320115 1798136 kubeadm.go:319] 
	I1216 07:45:18.320179 1798136 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 07:45:18.320232 1798136 kubeadm.go:319] 	- The kubelet is not running
	I1216 07:45:18.320356 1798136 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 07:45:18.320369 1798136 kubeadm.go:319] 
	I1216 07:45:18.320516 1798136 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 07:45:18.320565 1798136 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 07:45:18.320600 1798136 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 07:45:18.320608 1798136 kubeadm.go:319] 
	I1216 07:45:18.325958 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 07:45:18.326491 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 07:45:18.326621 1798136 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 07:45:18.326926 1798136 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 07:45:18.326936 1798136 kubeadm.go:319] 
	I1216 07:45:18.327017 1798136 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 07:45:18.327166 1798136 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001193351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001193351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
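The failure above is a kubelet health-check timeout: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and gives up because the kubelet never becomes healthy. The log's own troubleshooting suggestions can be run directly on the node; a sketch, with the kubelet.service unit name and the 10248 healthz port taken from the output above:

    # check whether the kubelet unit is running and why it may have exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # reproduce the health probe kubeadm was waiting on
    curl -sSL http://127.0.0.1:10248/healthz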
	
	I1216 07:45:18.327284 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 07:45:18.754392 1798136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:18.772231 1798136 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 07:45:18.772294 1798136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 07:45:18.783699 1798136 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 07:45:18.783726 1798136 kubeadm.go:158] found existing configuration files:
	
	I1216 07:45:18.783783 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 07:45:18.793812 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 07:45:18.793878 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 07:45:18.802961 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 07:45:18.812328 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 07:45:18.812395 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 07:45:18.821315 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 07:45:18.832420 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 07:45:18.832563 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 07:45:18.842333 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 07:45:18.852116 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 07:45:18.852185 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 07:45:18.860819 1798136 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 07:45:18.915421 1798136 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 07:45:18.915483 1798136 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 07:45:19.051011 1798136 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 07:45:19.051083 1798136 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 07:45:19.051119 1798136 kubeadm.go:319] OS: Linux
	I1216 07:45:19.051164 1798136 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 07:45:19.051212 1798136 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 07:45:19.051259 1798136 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 07:45:19.051307 1798136 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 07:45:19.051355 1798136 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 07:45:19.051403 1798136 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 07:45:19.051448 1798136 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 07:45:19.051496 1798136 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 07:45:19.051542 1798136 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 07:45:19.142528 1798136 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 07:45:19.143196 1798136 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 07:45:19.143363 1798136 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 07:45:19.156893 1798136 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 07:45:19.160253 1798136 out.go:252]   - Generating certificates and keys ...
	I1216 07:45:19.160353 1798136 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 07:45:19.160426 1798136 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 07:45:19.160538 1798136 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 07:45:19.160602 1798136 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 07:45:19.160681 1798136 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 07:45:19.160739 1798136 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 07:45:19.160806 1798136 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 07:45:19.160871 1798136 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 07:45:19.161229 1798136 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 07:45:19.161786 1798136 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 07:45:19.162232 1798136 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 07:45:19.162458 1798136 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 07:45:19.411128 1798136 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 07:45:19.797290 1798136 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 07:45:20.144939 1798136 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 07:45:20.919415 1798136 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 07:45:20.976842 1798136 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 07:45:20.979794 1798136 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 07:45:20.985045 1798136 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 07:45:20.988139 1798136 out.go:252]   - Booting up control plane ...
	I1216 07:45:20.988249 1798136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 07:45:20.996983 1798136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 07:45:20.998524 1798136 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 07:45:21.020391 1798136 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 07:45:21.020722 1798136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 07:45:21.030518 1798136 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 07:45:21.030630 1798136 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 07:45:21.030673 1798136 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 07:45:21.225745 1798136 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 07:45:21.225867 1798136 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 07:49:21.224996 1798136 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001058164s
	I1216 07:49:21.225031 1798136 kubeadm.go:319] 
	I1216 07:49:21.225086 1798136 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 07:49:21.225118 1798136 kubeadm.go:319] 	- The kubelet is not running
	I1216 07:49:21.225217 1798136 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 07:49:21.225223 1798136 kubeadm.go:319] 
	I1216 07:49:21.225331 1798136 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 07:49:21.225362 1798136 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 07:49:21.225391 1798136 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 07:49:21.225396 1798136 kubeadm.go:319] 
	I1216 07:49:21.229704 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 07:49:21.230175 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 07:49:21.230300 1798136 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 07:49:21.230562 1798136 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 07:49:21.230567 1798136 kubeadm.go:319] 
	I1216 07:49:21.230688 1798136 kubeadm.go:403] duration metric: took 12m8.57664064s to StartCluster
	I1216 07:49:21.230724 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:49:21.230783 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:49:21.230855 1798136 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 07:49:21.264535 1798136 cri.go:89] found id: ""
	I1216 07:49:21.264565 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.264574 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:49:21.264581 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:49:21.264645 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:49:21.293869 1798136 cri.go:89] found id: ""
	I1216 07:49:21.293894 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.293909 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:49:21.293916 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:49:21.294021 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:49:21.324235 1798136 cri.go:89] found id: ""
	I1216 07:49:21.324262 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.324271 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:49:21.324277 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:49:21.324337 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:49:21.359108 1798136 cri.go:89] found id: ""
	I1216 07:49:21.359135 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.359144 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:49:21.359150 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:49:21.359207 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:49:21.385348 1798136 cri.go:89] found id: ""
	I1216 07:49:21.385376 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.385385 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:49:21.385392 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:49:21.385452 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:49:21.426454 1798136 cri.go:89] found id: ""
	I1216 07:49:21.426480 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.426489 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:49:21.426497 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:49:21.426552 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:49:21.459394 1798136 cri.go:89] found id: ""
	I1216 07:49:21.459419 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.459428 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:49:21.459433 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:49:21.459493 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:49:21.487807 1798136 cri.go:89] found id: ""
	I1216 07:49:21.487834 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.487844 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:49:21.487853 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:49:21.487864 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:49:21.574940 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:49:21.575045 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:49:21.600817 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:49:21.600888 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:49:21.677070 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:49:21.677145 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:49:21.677182 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:49:21.712066 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:49:21.712102 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 07:49:21.744382 1798136 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 07:49:21.744434 1798136 out.go:285] * 
	* 
	W1216 07:49:21.744524 1798136 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
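The repeated SystemVerification warning suggests the node is still on cgroup v1, and kubelet v1.35 requires an explicit opt-in to keep running there. A minimal KubeletConfiguration sketch of the opt-in the warning describes; the lowerCamelCase YAML key failCgroupV1 is assumed from the option name 'FailCgroupV1' quoted in the warning, and whether or how minikube would wire this through its kubeadm patches is not shown in this log:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # opt in to the deprecated cgroup v1 support instead of failing at startup
    failCgroupV1: false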
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 07:49:21.744543 1798136 out.go:285] * 
	* 
	W1216 07:49:21.746666 1798136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 07:49:21.753587 1798136 out.go:203] 
	W1216 07:49:21.756292 1798136 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 07:49:21.756344 1798136 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 07:49:21.756373 1798136 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 07:49:21.759414 1798136 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
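A minimal way to act on the suggestion embedded in the failure output above (passing a kubelet cgroup-driver override) would be to re-run the same start invocation with the extra-config flag appended; the command below simply combines the args recorded at version_upgrade_test.go:245 with the flag suggested in the warning and is illustrative only, not part of the test run:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-530870 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd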
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-530870 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-530870 version --output=json: exit status 1 (112.790494ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
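The connection-refused error above (192.168.76.2:8443) is consistent with the earlier kubelet health-check failure: with no healthy kubelet, the kube-apiserver static pod never comes up, so kubectl can only print client-side version information. Two quick host-side checks, using the same binary and context that appear elsewhere in this report (illustrative only):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-530870
	kubectl --context kubernetes-upgrade-530870 get nodes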
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-16 07:49:22.462787195 +0000 UTC m=+5789.394590537
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-530870
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-530870:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ba14bf88ee1c721708c698b5cf033508506e885761c0e778ef5cfe835acae44",
	        "Created": "2025-12-16T07:36:29.976230852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1798312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T07:37:01.318071959Z",
	            "FinishedAt": "2025-12-16T07:37:00.026220444Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/6ba14bf88ee1c721708c698b5cf033508506e885761c0e778ef5cfe835acae44/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ba14bf88ee1c721708c698b5cf033508506e885761c0e778ef5cfe835acae44/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ba14bf88ee1c721708c698b5cf033508506e885761c0e778ef5cfe835acae44/hosts",
	        "LogPath": "/var/lib/docker/containers/6ba14bf88ee1c721708c698b5cf033508506e885761c0e778ef5cfe835acae44/6ba14bf88ee1c721708c698b5cf033508506e885761c0e778ef5cfe835acae44-json.log",
	        "Name": "/kubernetes-upgrade-530870",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-530870:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-530870",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ba14bf88ee1c721708c698b5cf033508506e885761c0e778ef5cfe835acae44",
	                "LowerDir": "/var/lib/docker/overlay2/6e9e0c5f9b2756f95bd27d85747fdc1856e5688b811a4fd3c679b7f96f0bd850-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e9e0c5f9b2756f95bd27d85747fdc1856e5688b811a4fd3c679b7f96f0bd850/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e9e0c5f9b2756f95bd27d85747fdc1856e5688b811a4fd3c679b7f96f0bd850/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e9e0c5f9b2756f95bd27d85747fdc1856e5688b811a4fd3c679b7f96f0bd850/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-530870",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-530870/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-530870",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-530870",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-530870",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb5909a05403c4c357eee374907b0367a3ff822795d1284e25a1a4b832e17b86",
	            "SandboxKey": "/var/run/docker/netns/eb5909a05403",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34510"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34511"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34514"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34512"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34513"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-530870": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:40:54:9c:c4:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "97f335cf3aa240383885c578f966b522aef5500d4d1e602cd250cb320411b21b",
	                    "EndpointID": "133ff58a37ae1e20bdfab44a7fc76f5400f8c58a4109bf9776d72b897da1552b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-530870",
	                        "6ba14bf88ee1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
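For reference, the host-side port that fronts the API server (8443/tcp mapped to 127.0.0.1:34513 in the inspect output above) can also be read directly with standard Docker CLI queries; both commands below are illustrative:

	docker port kubernetes-upgrade-530870 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-530870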
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-530870 -n kubernetes-upgrade-530870
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-530870 -n kubernetes-upgrade-530870: exit status 2 (391.12166ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
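Per the guidance box printed in the failure output above, a full log bundle suitable for attaching to a GitHub issue could be captured from the same profile with (illustrative):

	out/minikube-linux-arm64 -p kubernetes-upgrade-530870 logs --file=logs.txt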
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-530870 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-530870 logs -n 25: (1.403453458s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-829423 sudo systemctl status kubelet --all --full --no-pager                                                                  │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo systemctl cat kubelet --no-pager                                                                                  │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo journalctl -xeu kubelet --all --full --no-pager                                                                   │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo cat /etc/kubernetes/kubelet.conf                                                                                  │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo cat /var/lib/kubelet/config.yaml                                                                                  │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo systemctl status docker --all --full --no-pager                                                                   │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │                     │
	│ ssh     │ -p kindnet-829423 sudo systemctl cat docker --no-pager                                                                                   │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo cat /etc/docker/daemon.json                                                                                       │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │                     │
	│ ssh     │ -p kindnet-829423 sudo docker system info                                                                                                │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │                     │
	│ ssh     │ -p kindnet-829423 sudo systemctl status cri-docker --all --full --no-pager                                                               │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │                     │
	│ ssh     │ -p kindnet-829423 sudo systemctl cat cri-docker --no-pager                                                                               │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                          │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │                     │
	│ ssh     │ -p kindnet-829423 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                    │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo cri-dockerd --version                                                                                             │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo systemctl status containerd --all --full --no-pager                                                               │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │                     │
	│ ssh     │ -p kindnet-829423 sudo systemctl cat containerd --no-pager                                                                               │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo cat /lib/systemd/system/containerd.service                                                                        │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo cat /etc/containerd/config.toml                                                                                   │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo containerd config dump                                                                                            │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo systemctl status crio --all --full --no-pager                                                                     │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo systemctl cat crio --no-pager                                                                                     │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ ssh     │ -p kindnet-829423 sudo crio config                                                                                                       │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ delete  │ -p kindnet-829423                                                                                                                        │ kindnet-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │ 16 Dec 25 07:48 UTC │
	│ start   │ -p flannel-829423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio │ flannel-829423 │ jenkins │ v1.37.0 │ 16 Dec 25 07:48 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 07:48:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 07:48:49.151324 1838583 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:48:49.151441 1838583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:48:49.151447 1838583 out.go:374] Setting ErrFile to fd 2...
	I1216 07:48:49.151451 1838583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:48:49.151799 1838583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:48:49.152285 1838583 out.go:368] Setting JSON to false
	I1216 07:48:49.153867 1838583 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":37881,"bootTime":1765833449,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:48:49.154029 1838583 start.go:143] virtualization:  
	I1216 07:48:49.157562 1838583 out.go:179] * [flannel-829423] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:48:49.160254 1838583 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:48:49.160346 1838583 notify.go:221] Checking for updates...
	I1216 07:48:49.165114 1838583 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:48:49.168989 1838583 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:48:49.172264 1838583 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:48:49.175493 1838583 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:48:49.180508 1838583 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:48:49.184322 1838583 config.go:182] Loaded profile config "kubernetes-upgrade-530870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 07:48:49.184532 1838583 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:48:49.218210 1838583 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:48:49.218352 1838583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:48:49.287903 1838583 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 07:48:49.278547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/
usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:48:49.288015 1838583 docker.go:319] overlay module found
	I1216 07:48:49.291347 1838583 out.go:179] * Using the docker driver based on user configuration
	I1216 07:48:49.294389 1838583 start.go:309] selected driver: docker
	I1216 07:48:49.294413 1838583 start.go:927] validating driver "docker" against <nil>
	I1216 07:48:49.294428 1838583 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:48:49.295155 1838583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:48:49.348577 1838583 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 07:48:49.339147215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:48:49.348742 1838583 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 07:48:49.348959 1838583 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:48:49.351993 1838583 out.go:179] * Using Docker driver with root privileges
	I1216 07:48:49.354947 1838583 cni.go:84] Creating CNI manager for "flannel"
	I1216 07:48:49.354967 1838583 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1216 07:48:49.355035 1838583 start.go:353] cluster config:
	{Name:flannel-829423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-829423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:48:49.358257 1838583 out.go:179] * Starting "flannel-829423" primary control-plane node in "flannel-829423" cluster
	I1216 07:48:49.361127 1838583 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:48:49.364115 1838583 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:48:49.366988 1838583 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:48:49.367037 1838583 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 07:48:49.367063 1838583 cache.go:65] Caching tarball of preloaded images
	I1216 07:48:49.367087 1838583 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:48:49.367156 1838583 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:48:49.367167 1838583 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:48:49.367271 1838583 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/config.json ...
	I1216 07:48:49.367287 1838583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/config.json: {Name:mk63803bb03f03540caf5ced6b5ebd8501d3a705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:48:49.387044 1838583 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:48:49.387068 1838583 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:48:49.387089 1838583 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:48:49.387123 1838583 start.go:360] acquireMachinesLock for flannel-829423: {Name:mkcf3498604663fcff1474eeb6bd4db97fc0fcc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:48:49.387243 1838583 start.go:364] duration metric: took 99.135µs to acquireMachinesLock for "flannel-829423"
	I1216 07:48:49.387275 1838583 start.go:93] Provisioning new machine with config: &{Name:flannel-829423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-829423 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:48:49.387352 1838583 start.go:125] createHost starting for "" (driver="docker")
	I1216 07:48:49.390787 1838583 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 07:48:49.391031 1838583 start.go:159] libmachine.API.Create for "flannel-829423" (driver="docker")
	I1216 07:48:49.391069 1838583 client.go:173] LocalClient.Create starting
	I1216 07:48:49.391148 1838583 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem
	I1216 07:48:49.391189 1838583 main.go:143] libmachine: Decoding PEM data...
	I1216 07:48:49.391212 1838583 main.go:143] libmachine: Parsing certificate...
	I1216 07:48:49.391271 1838583 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem
	I1216 07:48:49.391294 1838583 main.go:143] libmachine: Decoding PEM data...
	I1216 07:48:49.391310 1838583 main.go:143] libmachine: Parsing certificate...
	I1216 07:48:49.391676 1838583 cli_runner.go:164] Run: docker network inspect flannel-829423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 07:48:49.407368 1838583 cli_runner.go:211] docker network inspect flannel-829423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 07:48:49.407459 1838583 network_create.go:284] running [docker network inspect flannel-829423] to gather additional debugging logs...
	I1216 07:48:49.407481 1838583 cli_runner.go:164] Run: docker network inspect flannel-829423
	W1216 07:48:49.423363 1838583 cli_runner.go:211] docker network inspect flannel-829423 returned with exit code 1
	I1216 07:48:49.423398 1838583 network_create.go:287] error running [docker network inspect flannel-829423]: docker network inspect flannel-829423: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-829423 not found
	I1216 07:48:49.423422 1838583 network_create.go:289] output of [docker network inspect flannel-829423]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-829423 not found
	
	** /stderr **
	I1216 07:48:49.423535 1838583 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:48:49.440976 1838583 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-34c8049a560a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:55:f3:91:6e:93} reservation:<nil>}
	I1216 07:48:49.441404 1838583 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-32157e3696e8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:88:e8:87:0c:95} reservation:<nil>}
	I1216 07:48:49.441839 1838583 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-902a0abe49a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:58:55:4e:11:6b} reservation:<nil>}
	I1216 07:48:49.442148 1838583 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-97f335cf3aa2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:a7:ba:2b:7b:52} reservation:<nil>}
	I1216 07:48:49.442633 1838583 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f1ee0}
	I1216 07:48:49.442660 1838583 network_create.go:124] attempt to create docker network flannel-829423 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 07:48:49.442715 1838583 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-829423 flannel-829423
	I1216 07:48:49.506701 1838583 network_create.go:108] docker network flannel-829423 192.168.85.0/24 created
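The subnet scan above inspects each existing bridge network, skips the /24s already in use, and settles on the first free private range (192.168.85.0/24 here). A minimal shell sketch of the same survey, using only the stock docker CLI (illustrative, not what network_create.go literally runs):

    # list every docker network with its IPAM subnet so a free 192.168.x.0/24 can be chosen
    docker network ls -q \
      | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'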
	I1216 07:48:49.506736 1838583 kic.go:121] calculated static IP "192.168.85.2" for the "flannel-829423" container
	I1216 07:48:49.506819 1838583 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 07:48:49.522450 1838583 cli_runner.go:164] Run: docker volume create flannel-829423 --label name.minikube.sigs.k8s.io=flannel-829423 --label created_by.minikube.sigs.k8s.io=true
	I1216 07:48:49.540743 1838583 oci.go:103] Successfully created a docker volume flannel-829423
	I1216 07:48:49.540836 1838583 cli_runner.go:164] Run: docker run --rm --name flannel-829423-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-829423 --entrypoint /usr/bin/test -v flannel-829423:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 07:48:50.110689 1838583 oci.go:107] Successfully prepared a docker volume flannel-829423
	I1216 07:48:50.110755 1838583 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:48:50.110765 1838583 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 07:48:50.110850 1838583 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v flannel-829423:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 07:48:54.120073 1838583 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v flannel-829423:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.009181028s)
	I1216 07:48:54.120114 1838583 kic.go:203] duration metric: took 4.009343736s to extract preloaded images to volume ...
	W1216 07:48:54.120307 1838583 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 07:48:54.120428 1838583 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 07:48:54.177764 1838583 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-829423 --name flannel-829423 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-829423 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-829423 --network flannel-829423 --ip 192.168.85.2 --volume flannel-829423:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 07:48:54.512604 1838583 cli_runner.go:164] Run: docker container inspect flannel-829423 --format={{.State.Running}}
	I1216 07:48:54.533060 1838583 cli_runner.go:164] Run: docker container inspect flannel-829423 --format={{.State.Status}}
	I1216 07:48:54.557828 1838583 cli_runner.go:164] Run: docker exec flannel-829423 stat /var/lib/dpkg/alternatives/iptables
	I1216 07:48:54.610019 1838583 oci.go:144] the created container "flannel-829423" has a running status.
	I1216 07:48:54.610045 1838583 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/flannel-829423/id_rsa...
	I1216 07:48:54.657839 1838583 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/flannel-829423/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 07:48:54.682613 1838583 cli_runner.go:164] Run: docker container inspect flannel-829423 --format={{.State.Status}}
	I1216 07:48:54.706019 1838583 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 07:48:54.706043 1838583 kic_runner.go:114] Args: [docker exec --privileged flannel-829423 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 07:48:54.757562 1838583 cli_runner.go:164] Run: docker container inspect flannel-829423 --format={{.State.Status}}
	I1216 07:48:54.777827 1838583 machine.go:94] provisionDockerMachine start ...
	I1216 07:48:54.777936 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:54.806933 1838583 main.go:143] libmachine: Using SSH client type: native
	I1216 07:48:54.807822 1838583 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34540 <nil> <nil>}
	I1216 07:48:54.807839 1838583 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:48:54.808724 1838583 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 07:48:57.940238 1838583 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-829423
	
	I1216 07:48:57.940283 1838583 ubuntu.go:182] provisioning hostname "flannel-829423"
	I1216 07:48:57.940349 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:57.958050 1838583 main.go:143] libmachine: Using SSH client type: native
	I1216 07:48:57.958368 1838583 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34540 <nil> <nil>}
	I1216 07:48:57.958387 1838583 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-829423 && echo "flannel-829423" | sudo tee /etc/hostname
	I1216 07:48:58.102310 1838583 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-829423
	
	I1216 07:48:58.102462 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:58.120674 1838583 main.go:143] libmachine: Using SSH client type: native
	I1216 07:48:58.120984 1838583 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34540 <nil> <nil>}
	I1216 07:48:58.121006 1838583 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-829423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-829423/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-829423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:48:58.268916 1838583 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:48:58.268946 1838583 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:48:58.268977 1838583 ubuntu.go:190] setting up certificates
	I1216 07:48:58.268993 1838583 provision.go:84] configureAuth start
	I1216 07:48:58.269062 1838583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-829423
	I1216 07:48:58.285651 1838583 provision.go:143] copyHostCerts
	I1216 07:48:58.285726 1838583 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:48:58.285741 1838583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:48:58.285820 1838583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:48:58.285914 1838583 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:48:58.285922 1838583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:48:58.285948 1838583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:48:58.286002 1838583 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:48:58.286013 1838583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:48:58.286037 1838583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:48:58.286090 1838583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.flannel-829423 san=[127.0.0.1 192.168.85.2 flannel-829423 localhost minikube]
	I1216 07:48:59.078052 1838583 provision.go:177] copyRemoteCerts
	I1216 07:48:59.078115 1838583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:48:59.078161 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:59.094584 1838583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34540 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/flannel-829423/id_rsa Username:docker}
	I1216 07:48:59.188384 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:48:59.206387 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 07:48:59.223764 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:48:59.241563 1838583 provision.go:87] duration metric: took 972.541011ms to configureAuth
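configureAuth signs a per-machine server certificate against the local minikube CA, with the SAN list logged above (127.0.0.1, 192.168.85.2, flannel-829423, localhost, minikube). A rough openssl equivalent, assuming the ca.pem/ca-key.pem pair from the certs directory; provision.go does this in Go, so the file names and flags below are only an approximation:

    # hypothetical re-creation of the server cert with the same subject alternative names
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.flannel-829423" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:flannel-829423,DNS:localhost,DNS:minikube')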
	I1216 07:48:59.241592 1838583 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:48:59.241828 1838583 config.go:182] Loaded profile config "flannel-829423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:48:59.241939 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:59.261928 1838583 main.go:143] libmachine: Using SSH client type: native
	I1216 07:48:59.262234 1838583 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34540 <nil> <nil>}
	I1216 07:48:59.262253 1838583 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:48:59.569243 1838583 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:48:59.569274 1838583 machine.go:97] duration metric: took 4.791423091s to provisionDockerMachine
	I1216 07:48:59.569284 1838583 client.go:176] duration metric: took 10.17820515s to LocalClient.Create
	I1216 07:48:59.569298 1838583 start.go:167] duration metric: took 10.178268207s to libmachine.API.Create "flannel-829423"
	I1216 07:48:59.569305 1838583 start.go:293] postStartSetup for "flannel-829423" (driver="docker")
	I1216 07:48:59.569315 1838583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:48:59.569380 1838583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:48:59.569428 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:59.586718 1838583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34540 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/flannel-829423/id_rsa Username:docker}
	I1216 07:48:59.686650 1838583 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:48:59.690494 1838583 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:48:59.690581 1838583 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:48:59.690609 1838583 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:48:59.690696 1838583 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:48:59.690834 1838583 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:48:59.690988 1838583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:48:59.699145 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:48:59.716713 1838583 start.go:296] duration metric: took 147.393913ms for postStartSetup
	I1216 07:48:59.717095 1838583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-829423
	I1216 07:48:59.733354 1838583 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/config.json ...
	I1216 07:48:59.733645 1838583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:48:59.733695 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:59.749717 1838583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34540 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/flannel-829423/id_rsa Username:docker}
	I1216 07:48:59.841502 1838583 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:48:59.846108 1838583 start.go:128] duration metric: took 10.458740332s to createHost
	I1216 07:48:59.846136 1838583 start.go:83] releasing machines lock for "flannel-829423", held for 10.458880683s
	I1216 07:48:59.846220 1838583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-829423
	I1216 07:48:59.864022 1838583 ssh_runner.go:195] Run: cat /version.json
	I1216 07:48:59.864057 1838583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:48:59.864076 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:59.864113 1838583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-829423
	I1216 07:48:59.887876 1838583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34540 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/flannel-829423/id_rsa Username:docker}
	I1216 07:48:59.899275 1838583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34540 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/flannel-829423/id_rsa Username:docker}
	I1216 07:48:59.988187 1838583 ssh_runner.go:195] Run: systemctl --version
	I1216 07:49:00.187420 1838583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:49:00.283299 1838583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:49:00.291184 1838583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:49:00.291272 1838583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:49:00.351518 1838583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 07:49:00.351610 1838583 start.go:496] detecting cgroup driver to use...
	I1216 07:49:00.351681 1838583 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:49:00.351778 1838583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:49:00.376562 1838583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:49:00.392594 1838583 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:49:00.392721 1838583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:49:00.413552 1838583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:49:00.448063 1838583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:49:00.580449 1838583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:49:00.721039 1838583 docker.go:234] disabling docker service ...
	I1216 07:49:00.721229 1838583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:49:00.743491 1838583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:49:00.759313 1838583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:49:00.889579 1838583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:49:01.001866 1838583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:49:01.017038 1838583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:49:01.033612 1838583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:49:01.033708 1838583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:49:01.042721 1838583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:49:01.042791 1838583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:49:01.051920 1838583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:49:01.060878 1838583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:49:01.069656 1838583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:49:01.078132 1838583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:49:01.086920 1838583 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:49:01.100696 1838583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:49:01.110178 1838583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:49:01.118036 1838583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:49:01.125869 1838583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:49:01.265380 1838583 ssh_runner.go:195] Run: sudo systemctl restart crio
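Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the daemon is restarted (the TOML section headers are assumed from CRI-O's default layout; only the key/value edits appear in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]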
	I1216 07:49:01.427838 1838583 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:49:01.427943 1838583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:49:01.431839 1838583 start.go:564] Will wait 60s for crictl version
	I1216 07:49:01.431905 1838583 ssh_runner.go:195] Run: which crictl
	I1216 07:49:01.436333 1838583 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:49:01.465399 1838583 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:49:01.465553 1838583 ssh_runner.go:195] Run: crio --version
	I1216 07:49:01.495397 1838583 ssh_runner.go:195] Run: crio --version
	I1216 07:49:01.526907 1838583 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:49:01.529681 1838583 cli_runner.go:164] Run: docker network inspect flannel-829423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:49:01.550286 1838583 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 07:49:01.554369 1838583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:49:01.564629 1838583 kubeadm.go:884] updating cluster {Name:flannel-829423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-829423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:49:01.564754 1838583 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:49:01.564817 1838583 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:49:01.597138 1838583 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:49:01.597161 1838583 crio.go:433] Images already preloaded, skipping extraction
	I1216 07:49:01.597221 1838583 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:49:01.622142 1838583 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:49:01.622165 1838583 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:49:01.622173 1838583 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1216 07:49:01.622308 1838583 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=flannel-829423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:flannel-829423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
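The unit fragment above is what gets written to the kubelet drop-in a few lines below (10-kubeadm.conf plus kubelet.service). On the node, the effective unit, including the minikube-written ExecStart, can be checked with systemd itself:

    # print kubelet.service together with every drop-in that overrides it
    systemctl cat kubelet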
	I1216 07:49:01.622396 1838583 ssh_runner.go:195] Run: crio config
	I1216 07:49:01.682133 1838583 cni.go:84] Creating CNI manager for "flannel"
	I1216 07:49:01.682164 1838583 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:49:01.682190 1838583 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-829423 NodeName:flannel-829423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:49:01.682326 1838583 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-829423"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
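The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml a few lines below. A quick way to sanity-check such a file on the node, assuming the same binary path the test uses, is a dry run that renders everything without touching the host:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run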
	I1216 07:49:01.682411 1838583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:49:01.690410 1838583 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:49:01.690490 1838583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 07:49:01.698432 1838583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1216 07:49:01.712077 1838583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:49:01.725646 1838583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1216 07:49:01.739652 1838583 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 07:49:01.743444 1838583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 07:49:01.753682 1838583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:49:01.880726 1838583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:49:01.900950 1838583 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423 for IP: 192.168.85.2
	I1216 07:49:01.901013 1838583 certs.go:195] generating shared ca certs ...
	I1216 07:49:01.901045 1838583 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:49:01.901224 1838583 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:49:01.901337 1838583 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:49:01.901365 1838583 certs.go:257] generating profile certs ...
	I1216 07:49:01.901453 1838583 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/client.key
	I1216 07:49:01.901485 1838583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/client.crt with IP's: []
	I1216 07:49:02.235222 1838583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/client.crt ...
	I1216 07:49:02.235256 1838583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/client.crt: {Name:mk5c518b666441edb04952417c07cbba288acf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:49:02.235466 1838583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/client.key ...
	I1216 07:49:02.235481 1838583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/client.key: {Name:mk608fdae884d234889ff8f890ced69ad56cdb23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:49:02.235566 1838583 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.key.c5758998
	I1216 07:49:02.235586 1838583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.crt.c5758998 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1216 07:49:02.510512 1838583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.crt.c5758998 ...
	I1216 07:49:02.510545 1838583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.crt.c5758998: {Name:mk9ef8be4ac1ba8443d358c57b7bf08ad7b71b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:49:02.510757 1838583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.key.c5758998 ...
	I1216 07:49:02.510776 1838583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.key.c5758998: {Name:mk1fdbef80afb1774c9d72f5536efddbfd15e430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:49:02.510865 1838583 certs.go:382] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.crt.c5758998 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.crt
	I1216 07:49:02.510947 1838583 certs.go:386] copying /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.key.c5758998 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.key
	I1216 07:49:02.511014 1838583 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.key
	I1216 07:49:02.511033 1838583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.crt with IP's: []
	I1216 07:49:02.632419 1838583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.crt ...
	I1216 07:49:02.632450 1838583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.crt: {Name:mk333931ba3bdf7a3738d00837d5455035519481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:49:02.632623 1838583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.key ...
	I1216 07:49:02.632643 1838583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.key: {Name:mkec125d11b09474e158722fd9f9c69215384b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:49:02.632834 1838583 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:49:02.632883 1838583 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:49:02.632897 1838583 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:49:02.632927 1838583 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:49:02.632956 1838583 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:49:02.632983 1838583 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:49:02.633030 1838583 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:49:02.633679 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:49:02.657235 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:49:02.679691 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:49:02.701004 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:49:02.720889 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 07:49:02.740332 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 07:49:02.761836 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:49:02.783351 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 07:49:02.802420 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:49:02.821291 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:49:02.840116 1838583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:49:02.859399 1838583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:49:02.872980 1838583 ssh_runner.go:195] Run: openssl version
	I1216 07:49:02.879361 1838583 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:49:02.887756 1838583 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:49:02.895753 1838583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:49:02.899834 1838583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:49:02.899940 1838583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:49:02.941246 1838583 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:49:02.949306 1838583 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/15992552.pem /etc/ssl/certs/3ec20f2e.0
	I1216 07:49:02.957076 1838583 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:49:02.964908 1838583 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:49:02.972526 1838583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:49:02.976386 1838583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:49:02.976456 1838583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:49:03.018207 1838583 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:49:03.026082 1838583 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 07:49:03.033846 1838583 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:49:03.041483 1838583 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:49:03.049554 1838583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:49:03.053839 1838583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:49:03.053946 1838583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:49:03.095571 1838583 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:49:03.104152 1838583 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1599255.pem /etc/ssl/certs/51391683.0
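The test/link sequence above follows OpenSSL's hashed-directory convention: every CA placed in /usr/share/ca-certificates is exposed in /etc/ssl/certs under a <subject-hash>.0 symlink so TLS clients can locate it. The same step by hand, for a hypothetical cert at /usr/share/ca-certificates/example.pem:

    # compute the subject hash and create the symlink name openssl looks up
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"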
	I1216 07:49:03.112054 1838583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:49:03.115768 1838583 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 07:49:03.115833 1838583 kubeadm.go:401] StartCluster: {Name:flannel-829423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-829423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:49:03.115907 1838583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:49:03.115966 1838583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:49:03.144073 1838583 cri.go:89] found id: ""
	I1216 07:49:03.144153 1838583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:49:03.152219 1838583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 07:49:03.160270 1838583 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 07:49:03.160378 1838583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 07:49:03.168779 1838583 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 07:49:03.168797 1838583 kubeadm.go:158] found existing configuration files:
	
	I1216 07:49:03.168853 1838583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 07:49:03.176860 1838583 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 07:49:03.176969 1838583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 07:49:03.184636 1838583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 07:49:03.192688 1838583 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 07:49:03.192804 1838583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 07:49:03.200610 1838583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 07:49:03.208550 1838583 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 07:49:03.208687 1838583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 07:49:03.216199 1838583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 07:49:03.224549 1838583 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 07:49:03.224666 1838583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 07:49:03.233272 1838583 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 07:49:03.278908 1838583 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 07:49:03.279071 1838583 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 07:49:03.301027 1838583 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 07:49:03.301186 1838583 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 07:49:03.301276 1838583 kubeadm.go:319] OS: Linux
	I1216 07:49:03.301353 1838583 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 07:49:03.301459 1838583 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 07:49:03.301540 1838583 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 07:49:03.301621 1838583 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 07:49:03.301708 1838583 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 07:49:03.301788 1838583 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 07:49:03.301865 1838583 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 07:49:03.301947 1838583 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 07:49:03.302029 1838583 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 07:49:03.370266 1838583 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 07:49:03.370438 1838583 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 07:49:03.370581 1838583 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 07:49:03.380936 1838583 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 07:49:03.387190 1838583 out.go:252]   - Generating certificates and keys ...
	I1216 07:49:03.387315 1838583 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 07:49:03.387398 1838583 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 07:49:03.692199 1838583 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 07:49:03.979098 1838583 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 07:49:04.516754 1838583 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 07:49:04.728642 1838583 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 07:49:05.649531 1838583 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 07:49:05.649856 1838583 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-829423 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 07:49:05.768603 1838583 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 07:49:05.771786 1838583 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-829423 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 07:49:06.724960 1838583 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 07:49:06.823021 1838583 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 07:49:07.009109 1838583 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 07:49:07.009552 1838583 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 07:49:07.178825 1838583 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 07:49:07.430793 1838583 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 07:49:08.623574 1838583 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 07:49:08.694625 1838583 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 07:49:09.211650 1838583 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 07:49:09.212355 1838583 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 07:49:09.215136 1838583 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 07:49:09.218525 1838583 out.go:252]   - Booting up control plane ...
	I1216 07:49:09.218629 1838583 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 07:49:09.218723 1838583 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 07:49:09.218791 1838583 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 07:49:09.235891 1838583 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 07:49:09.236032 1838583 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 07:49:09.244155 1838583 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 07:49:09.244922 1838583 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 07:49:09.245122 1838583 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 07:49:09.381051 1838583 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 07:49:09.381178 1838583 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 07:49:10.390631 1838583 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.009726525s
	I1216 07:49:10.394299 1838583 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 07:49:10.394410 1838583 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 07:49:10.394694 1838583 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 07:49:10.394794 1838583 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 07:49:14.295418 1838583 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.900639296s
	I1216 07:49:15.999415 1838583 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.605181075s
	I1216 07:49:16.900639 1838583 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501242292s
	I1216 07:49:16.942772 1838583 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 07:49:16.960770 1838583 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 07:49:16.983396 1838583 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 07:49:16.983604 1838583 kubeadm.go:319] [mark-control-plane] Marking the node flannel-829423 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 07:49:16.997147 1838583 kubeadm.go:319] [bootstrap-token] Using token: 5w5dqv.gr1dcgn3cdpucbf4
	I1216 07:49:17.000035 1838583 out.go:252]   - Configuring RBAC rules ...
	I1216 07:49:17.000166 1838583 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 07:49:17.006102 1838583 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 07:49:17.015370 1838583 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 07:49:17.023563 1838583 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 07:49:17.027820 1838583 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 07:49:17.031773 1838583 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 07:49:17.303853 1838583 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 07:49:17.767506 1838583 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 07:49:18.303191 1838583 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 07:49:18.304617 1838583 kubeadm.go:319] 
	I1216 07:49:18.304697 1838583 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 07:49:18.304723 1838583 kubeadm.go:319] 
	I1216 07:49:18.304805 1838583 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 07:49:18.304816 1838583 kubeadm.go:319] 
	I1216 07:49:18.304842 1838583 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 07:49:18.304905 1838583 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 07:49:18.304958 1838583 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 07:49:18.304966 1838583 kubeadm.go:319] 
	I1216 07:49:18.305021 1838583 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 07:49:18.305027 1838583 kubeadm.go:319] 
	I1216 07:49:18.305075 1838583 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 07:49:18.305083 1838583 kubeadm.go:319] 
	I1216 07:49:18.305135 1838583 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 07:49:18.305221 1838583 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 07:49:18.305300 1838583 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 07:49:18.305308 1838583 kubeadm.go:319] 
	I1216 07:49:18.305393 1838583 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 07:49:18.305473 1838583 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 07:49:18.305481 1838583 kubeadm.go:319] 
	I1216 07:49:18.305565 1838583 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5w5dqv.gr1dcgn3cdpucbf4 \
	I1216 07:49:18.305673 1838583 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b5016a2f19357bbe076308b3bd53072319152b21d9550fc4ffc6d799a06c05 \
	I1216 07:49:18.305698 1838583 kubeadm.go:319] 	--control-plane 
	I1216 07:49:18.305706 1838583 kubeadm.go:319] 
	I1216 07:49:18.305791 1838583 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 07:49:18.305798 1838583 kubeadm.go:319] 
	I1216 07:49:18.305880 1838583 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5w5dqv.gr1dcgn3cdpucbf4 \
	I1216 07:49:18.305989 1838583 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b5016a2f19357bbe076308b3bd53072319152b21d9550fc4ffc6d799a06c05 
	I1216 07:49:18.311178 1838583 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 07:49:18.311408 1838583 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 07:49:18.311520 1838583 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 07:49:18.311541 1838583 cni.go:84] Creating CNI manager for "flannel"
	I1216 07:49:18.314860 1838583 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1216 07:49:18.317941 1838583 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 07:49:18.322164 1838583 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 07:49:18.322189 1838583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1216 07:49:18.337040 1838583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 07:49:18.791657 1838583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 07:49:18.791804 1838583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 07:49:18.791967 1838583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-829423 minikube.k8s.io/updated_at=2025_12_16T07_49_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=flannel-829423 minikube.k8s.io/primary=true
	I1216 07:49:18.988330 1838583 ops.go:34] apiserver oom_adj: -16
	I1216 07:49:18.988437 1838583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 07:49:21.224996 1798136 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001058164s
	I1216 07:49:21.225031 1798136 kubeadm.go:319] 
	I1216 07:49:21.225086 1798136 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 07:49:21.225118 1798136 kubeadm.go:319] 	- The kubelet is not running
	I1216 07:49:21.225217 1798136 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 07:49:21.225223 1798136 kubeadm.go:319] 
	I1216 07:49:21.225331 1798136 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 07:49:21.225362 1798136 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 07:49:21.225391 1798136 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 07:49:21.225396 1798136 kubeadm.go:319] 
	I1216 07:49:21.229704 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 07:49:21.230175 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 07:49:21.230300 1798136 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 07:49:21.230562 1798136 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 07:49:21.230567 1798136 kubeadm.go:319] 
	I1216 07:49:21.230688 1798136 kubeadm.go:403] duration metric: took 12m8.57664064s to StartCluster
	I1216 07:49:21.230724 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 07:49:21.230783 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 07:49:21.230855 1798136 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 07:49:21.264535 1798136 cri.go:89] found id: ""
	I1216 07:49:21.264565 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.264574 1798136 logs.go:284] No container was found matching "kube-apiserver"
	I1216 07:49:21.264581 1798136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 07:49:21.264645 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 07:49:21.293869 1798136 cri.go:89] found id: ""
	I1216 07:49:21.293894 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.293909 1798136 logs.go:284] No container was found matching "etcd"
	I1216 07:49:21.293916 1798136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 07:49:21.294021 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 07:49:21.324235 1798136 cri.go:89] found id: ""
	I1216 07:49:21.324262 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.324271 1798136 logs.go:284] No container was found matching "coredns"
	I1216 07:49:21.324277 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 07:49:21.324337 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 07:49:21.359108 1798136 cri.go:89] found id: ""
	I1216 07:49:21.359135 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.359144 1798136 logs.go:284] No container was found matching "kube-scheduler"
	I1216 07:49:21.359150 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 07:49:21.359207 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 07:49:21.385348 1798136 cri.go:89] found id: ""
	I1216 07:49:21.385376 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.385385 1798136 logs.go:284] No container was found matching "kube-proxy"
	I1216 07:49:21.385392 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 07:49:21.385452 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 07:49:21.426454 1798136 cri.go:89] found id: ""
	I1216 07:49:21.426480 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.426489 1798136 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 07:49:21.426497 1798136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 07:49:21.426552 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 07:49:21.459394 1798136 cri.go:89] found id: ""
	I1216 07:49:21.459419 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.459428 1798136 logs.go:284] No container was found matching "kindnet"
	I1216 07:49:21.459433 1798136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 07:49:21.459493 1798136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 07:49:21.487807 1798136 cri.go:89] found id: ""
	I1216 07:49:21.487834 1798136 logs.go:282] 0 containers: []
	W1216 07:49:21.487844 1798136 logs.go:284] No container was found matching "storage-provisioner"
	I1216 07:49:21.487853 1798136 logs.go:123] Gathering logs for kubelet ...
	I1216 07:49:21.487864 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 07:49:21.574940 1798136 logs.go:123] Gathering logs for dmesg ...
	I1216 07:49:21.575045 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 07:49:21.600817 1798136 logs.go:123] Gathering logs for describe nodes ...
	I1216 07:49:21.600888 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 07:49:21.677070 1798136 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 07:49:21.677145 1798136 logs.go:123] Gathering logs for CRI-O ...
	I1216 07:49:21.677182 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 07:49:21.712066 1798136 logs.go:123] Gathering logs for container status ...
	I1216 07:49:21.712102 1798136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 07:49:21.744382 1798136 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 07:49:21.744434 1798136 out.go:285] * 
	W1216 07:49:21.744524 1798136 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 07:49:21.744543 1798136 out.go:285] * 
	W1216 07:49:21.746666 1798136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 07:49:21.753587 1798136 out.go:203] 
	W1216 07:49:21.756292 1798136 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001058164s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 07:49:21.756344 1798136 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 07:49:21.756373 1798136 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 07:49:21.759414 1798136 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393478047Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393516095Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393562996Z" level=info msg="Create NRI interface"
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393680437Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393689988Z" level=info msg="runtime interface created"
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393700901Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393707079Z" level=info msg="runtime interface starting up..."
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393713307Z" level=info msg="starting plugins..."
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393726854Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 07:37:07 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:37:07.393794071Z" level=info msg="No systemd watchdog enabled"
	Dec 16 07:37:07 kubernetes-upgrade-530870 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 07:41:16 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:41:16.85632546Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=fd6833d6-c765-4feb-8561-e7d5406ee187 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:41:16 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:41:16.857448492Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0fda054b-99c1-41a4-b4de-7d5d2831293d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:41:16 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:41:16.858310746Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=34032e3a-cd2c-47bc-94e8-3fb0bfb5eff7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:41:16 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:41:16.861043166Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3f7669c2-af37-4aaa-9fa0-6844785ce477 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:41:16 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:41:16.86164291Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=66e9dccb-d157-4ecc-b709-4f3368ae98a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:41:16 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:41:16.862142452Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7d0595a0-65bd-4635-999f-c27622f02287 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:41:16 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:41:16.862746947Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=bccf8022-eb2a-4eab-828d-4b03c1be6e4d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:45:19 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:45:19.149943707Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=94b4a01f-8dbc-421d-a2d1-eb77b4e6bf08 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:45:19 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:45:19.150743371Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=4471a1c2-5e96-416b-9d59-7d3b66a5c2e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:45:19 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:45:19.151609521Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0af04545-3d1f-4f17-8813-35ba8210c690 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:45:19 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:45:19.15273764Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d07a566e-ade3-4778-9da0-2a06c89292c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:45:19 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:45:19.153289006Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=9c3f81ba-e595-4b8f-9acd-e54e7549d3c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:45:19 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:45:19.153823157Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=47f14500-fea7-469c-bd01-2747f05547df name=/runtime.v1.ImageService/ImageStatus
	Dec 16 07:45:19 kubernetes-upgrade-530870 crio[615]: time="2025-12-16T07:45:19.154324848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=e3440f53-9f4b-4be3-ae61-d0a110b038d1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 07:12] overlayfs: idmapped layers are currently not supported
	[Dec16 07:17] overlayfs: idmapped layers are currently not supported
	[ +33.022046] overlayfs: idmapped layers are currently not supported
	[Dec16 07:18] overlayfs: idmapped layers are currently not supported
	[Dec16 07:19] overlayfs: idmapped layers are currently not supported
	[Dec16 07:20] overlayfs: idmapped layers are currently not supported
	[Dec16 07:22] overlayfs: idmapped layers are currently not supported
	[Dec16 07:23] overlayfs: idmapped layers are currently not supported
	[  +6.617945] overlayfs: idmapped layers are currently not supported
	[ +47.625208] overlayfs: idmapped layers are currently not supported
	[Dec16 07:24] overlayfs: idmapped layers are currently not supported
	[Dec16 07:25] overlayfs: idmapped layers are currently not supported
	[ +25.916657] overlayfs: idmapped layers are currently not supported
	[Dec16 07:26] overlayfs: idmapped layers are currently not supported
	[Dec16 07:27] overlayfs: idmapped layers are currently not supported
	[Dec16 07:29] overlayfs: idmapped layers are currently not supported
	[Dec16 07:31] overlayfs: idmapped layers are currently not supported
	[Dec16 07:32] overlayfs: idmapped layers are currently not supported
	[ +24.023346] overlayfs: idmapped layers are currently not supported
	[Dec16 07:33] overlayfs: idmapped layers are currently not supported
	[Dec16 07:36] overlayfs: idmapped layers are currently not supported
	[Dec16 07:43] overlayfs: idmapped layers are currently not supported
	[Dec16 07:45] overlayfs: idmapped layers are currently not supported
	[Dec16 07:47] overlayfs: idmapped layers are currently not supported
	[Dec16 07:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:49:24 up 10:31,  0 user,  load average: 1.68, 1.65, 1.75
	Linux kubernetes-upgrade-530870 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 07:49:21 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 07:49:22 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 16 07:49:22 kubernetes-upgrade-530870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 07:49:22 kubernetes-upgrade-530870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 07:49:22 kubernetes-upgrade-530870 kubelet[12226]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 07:49:22 kubernetes-upgrade-530870 kubelet[12226]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 07:49:22 kubernetes-upgrade-530870 kubelet[12226]: E1216 07:49:22.280139   12226 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 07:49:22 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 07:49:22 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 07:49:23 kubernetes-upgrade-530870 kubelet[12247]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 07:49:23 kubernetes-upgrade-530870 kubelet[12247]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 07:49:23 kubernetes-upgrade-530870 kubelet[12247]: E1216 07:49:23.279157   12247 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 07:49:23 kubernetes-upgrade-530870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 07:49:24 kubernetes-upgrade-530870 kubelet[12327]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 07:49:24 kubernetes-upgrade-530870 kubelet[12327]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 07:49:24 kubernetes-upgrade-530870 kubelet[12327]: E1216 07:49:24.073920   12327 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 07:49:24 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 07:49:24 kubernetes-upgrade-530870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-530870 -n kubernetes-upgrade-530870
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-530870 -n kubernetes-upgrade-530870: exit status 2 (496.072892ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-530870" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-530870" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-530870
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-530870: (2.494216605s)
--- FAIL: TestKubernetesUpgrade (784.45s)
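The kubelet warnings and the "Suggestion" line captured above point at a manual retry path for this profile. A minimal sketch of that retry is shown below, assuming the host stays on cgroup v1: the binary path, profile name and Kubernetes version are quoted from this log, the cgroup-driver flag is the one the suggestion itself names, and whether that alone is sufficient without also setting the kubelet 'FailCgroupV1' configuration option to 'false' (as the warning requires for kubelet v1.35+) was not verified by this run.

	# Hedged sketch based on the suggestion and warnings captured above; not executed by this test run.
	journalctl -xeu kubelet                            # inspect why the kubelet kept exiting
	out/minikube-linux-arm64 start -p kubernetes-upgrade-530870 \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd     # workaround suggested in the failure output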

                                                
                                    
x
+
TestPause/serial/Pause (7.12s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-375517 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-375517 --alsologtostderr -v=5: exit status 80 (1.996840312s)

                                                
                                                
-- stdout --
	* Pausing node pause-375517 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 07:45:17.013946 1824843 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:45:17.015315 1824843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:45:17.015333 1824843 out.go:374] Setting ErrFile to fd 2...
	I1216 07:45:17.015340 1824843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:45:17.015656 1824843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:45:17.015943 1824843 out.go:368] Setting JSON to false
	I1216 07:45:17.016024 1824843 mustload.go:66] Loading cluster: pause-375517
	I1216 07:45:17.016542 1824843 config.go:182] Loaded profile config "pause-375517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:45:17.017046 1824843 cli_runner.go:164] Run: docker container inspect pause-375517 --format={{.State.Status}}
	I1216 07:45:17.034392 1824843 host.go:66] Checking if "pause-375517" exists ...
	I1216 07:45:17.034725 1824843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:45:17.087015 1824843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 07:45:17.076898553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:45:17.087702 1824843 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-375517 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 07:45:17.090762 1824843 out.go:179] * Pausing node pause-375517 ... 
	I1216 07:45:17.094580 1824843 host.go:66] Checking if "pause-375517" exists ...
	I1216 07:45:17.094940 1824843 ssh_runner.go:195] Run: systemctl --version
	I1216 07:45:17.095000 1824843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:45:17.112172 1824843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:45:17.217430 1824843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:17.231303 1824843 pause.go:52] kubelet running: true
	I1216 07:45:17.231376 1824843 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 07:45:17.455891 1824843 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 07:45:17.456026 1824843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 07:45:17.522279 1824843 cri.go:89] found id: "54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255"
	I1216 07:45:17.522304 1824843 cri.go:89] found id: "4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1"
	I1216 07:45:17.522310 1824843 cri.go:89] found id: "6f71b54f1ba1c65db4deac9b06dfb0d64fce05e51d14ce123aa0eb55cc857a8a"
	I1216 07:45:17.522314 1824843 cri.go:89] found id: "546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2"
	I1216 07:45:17.522317 1824843 cri.go:89] found id: "e10fa0b3d14a2100bd2a09bb4836e34a4acbec6c8e118a3af3a2450450d66d20"
	I1216 07:45:17.522321 1824843 cri.go:89] found id: "8a86aee515e658b93fc7d10adcd6891a6e4eba14453f9747b4a80a7a361f9266"
	I1216 07:45:17.522324 1824843 cri.go:89] found id: "ca7b3191cff4e83a60f9aac1be7a6876b73f4e2f8e10d231f2aa6d86edad73c5"
	I1216 07:45:17.522327 1824843 cri.go:89] found id: "e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc"
	I1216 07:45:17.522330 1824843 cri.go:89] found id: "6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9"
	I1216 07:45:17.522357 1824843 cri.go:89] found id: "76751aa856240697939bca0a05e7d09ba45b9f61fb6c295aaabb8abcd8159e58"
	I1216 07:45:17.522366 1824843 cri.go:89] found id: "618d84d84bc9252e472478ee25b67f5df79c96360a711e03237ac217587aba39"
	I1216 07:45:17.522369 1824843 cri.go:89] found id: "517fe82d993ee6f5d824214ffe285d9f264597a2c18d89d85e7f4597b139afb3"
	I1216 07:45:17.522372 1824843 cri.go:89] found id: "2463d298843b3b6c96a2c0d31eeb6e502860ca63ff1ab141388cacd59870583f"
	I1216 07:45:17.522376 1824843 cri.go:89] found id: "a1cf99c5084156ca31d04538b1e6cde3c65b6503d4a9ab7f17a0b4226501fd7a"
	I1216 07:45:17.522379 1824843 cri.go:89] found id: ""
	I1216 07:45:17.522441 1824843 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 07:45:17.533581 1824843 retry.go:31] will retry after 316.644863ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:45:17Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:45:17.851261 1824843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:17.867441 1824843 pause.go:52] kubelet running: false
	I1216 07:45:17.867537 1824843 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 07:45:18.034881 1824843 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 07:45:18.034973 1824843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 07:45:18.101754 1824843 cri.go:89] found id: "54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255"
	I1216 07:45:18.101780 1824843 cri.go:89] found id: "4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1"
	I1216 07:45:18.101786 1824843 cri.go:89] found id: "6f71b54f1ba1c65db4deac9b06dfb0d64fce05e51d14ce123aa0eb55cc857a8a"
	I1216 07:45:18.101789 1824843 cri.go:89] found id: "546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2"
	I1216 07:45:18.101793 1824843 cri.go:89] found id: "e10fa0b3d14a2100bd2a09bb4836e34a4acbec6c8e118a3af3a2450450d66d20"
	I1216 07:45:18.101796 1824843 cri.go:89] found id: "8a86aee515e658b93fc7d10adcd6891a6e4eba14453f9747b4a80a7a361f9266"
	I1216 07:45:18.101800 1824843 cri.go:89] found id: "ca7b3191cff4e83a60f9aac1be7a6876b73f4e2f8e10d231f2aa6d86edad73c5"
	I1216 07:45:18.101808 1824843 cri.go:89] found id: "e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc"
	I1216 07:45:18.101812 1824843 cri.go:89] found id: "6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9"
	I1216 07:45:18.101818 1824843 cri.go:89] found id: "76751aa856240697939bca0a05e7d09ba45b9f61fb6c295aaabb8abcd8159e58"
	I1216 07:45:18.101821 1824843 cri.go:89] found id: "618d84d84bc9252e472478ee25b67f5df79c96360a711e03237ac217587aba39"
	I1216 07:45:18.101824 1824843 cri.go:89] found id: "517fe82d993ee6f5d824214ffe285d9f264597a2c18d89d85e7f4597b139afb3"
	I1216 07:45:18.101828 1824843 cri.go:89] found id: "2463d298843b3b6c96a2c0d31eeb6e502860ca63ff1ab141388cacd59870583f"
	I1216 07:45:18.101831 1824843 cri.go:89] found id: "a1cf99c5084156ca31d04538b1e6cde3c65b6503d4a9ab7f17a0b4226501fd7a"
	I1216 07:45:18.101835 1824843 cri.go:89] found id: ""
	I1216 07:45:18.101889 1824843 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 07:45:18.113711 1824843 retry.go:31] will retry after 507.965306ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:45:18Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:45:18.622377 1824843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:18.635709 1824843 pause.go:52] kubelet running: false
	I1216 07:45:18.635797 1824843 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 07:45:18.825249 1824843 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 07:45:18.825340 1824843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 07:45:18.918045 1824843 cri.go:89] found id: "54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255"
	I1216 07:45:18.918070 1824843 cri.go:89] found id: "4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1"
	I1216 07:45:18.918076 1824843 cri.go:89] found id: "6f71b54f1ba1c65db4deac9b06dfb0d64fce05e51d14ce123aa0eb55cc857a8a"
	I1216 07:45:18.918080 1824843 cri.go:89] found id: "546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2"
	I1216 07:45:18.918083 1824843 cri.go:89] found id: "e10fa0b3d14a2100bd2a09bb4836e34a4acbec6c8e118a3af3a2450450d66d20"
	I1216 07:45:18.918086 1824843 cri.go:89] found id: "8a86aee515e658b93fc7d10adcd6891a6e4eba14453f9747b4a80a7a361f9266"
	I1216 07:45:18.918090 1824843 cri.go:89] found id: "ca7b3191cff4e83a60f9aac1be7a6876b73f4e2f8e10d231f2aa6d86edad73c5"
	I1216 07:45:18.918093 1824843 cri.go:89] found id: "e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc"
	I1216 07:45:18.918096 1824843 cri.go:89] found id: "6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9"
	I1216 07:45:18.918103 1824843 cri.go:89] found id: "76751aa856240697939bca0a05e7d09ba45b9f61fb6c295aaabb8abcd8159e58"
	I1216 07:45:18.918106 1824843 cri.go:89] found id: "618d84d84bc9252e472478ee25b67f5df79c96360a711e03237ac217587aba39"
	I1216 07:45:18.918109 1824843 cri.go:89] found id: "517fe82d993ee6f5d824214ffe285d9f264597a2c18d89d85e7f4597b139afb3"
	I1216 07:45:18.918112 1824843 cri.go:89] found id: "2463d298843b3b6c96a2c0d31eeb6e502860ca63ff1ab141388cacd59870583f"
	I1216 07:45:18.918117 1824843 cri.go:89] found id: "a1cf99c5084156ca31d04538b1e6cde3c65b6503d4a9ab7f17a0b4226501fd7a"
	I1216 07:45:18.918125 1824843 cri.go:89] found id: ""
	I1216 07:45:18.918175 1824843 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 07:45:18.933867 1824843 out.go:203] 
	W1216 07:45:18.936862 1824843 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:45:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 07:45:18.936895 1824843 out.go:285] * 
	W1216 07:45:18.945339 1824843 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 07:45:18.948415 1824843 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-375517 --alsologtostderr -v=5" : exit status 80
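helpers_test.go: note: the pause failure above can be checked by hand with the same commands minikube runs over SSH: cri-o still lists the kube-system containers via crictl, but `sudo runc list -f json` fails because /run/runc (runc's default state directory) does not exist on the node. A minimal check, assuming the profile name `pause-375517` from this run and using `minikube ssh` as one way to reach the node (the inner commands are copied from the ssh_runner lines in the log; the wrapper is an assumption):
	$ minikube ssh -p pause-375517 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # cri-o reports the running containers
	$ minikube ssh -p pause-375517 -- sudo runc list -f json                                                      # fails: open /run/runc: no such file or directory
	$ minikube ssh -p pause-375517 -- ls -ld /run/runc                                                            # confirms the runc state directory is missing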
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-375517
helpers_test.go:244: (dbg) docker inspect pause-375517:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da",
	        "Created": "2025-12-16T07:43:34.345225442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1820978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T07:43:34.42540524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/hostname",
	        "HostsPath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/hosts",
	        "LogPath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da-json.log",
	        "Name": "/pause-375517",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-375517:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-375517",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da",
	                "LowerDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-375517",
	                "Source": "/var/lib/docker/volumes/pause-375517/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-375517",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-375517",
	                "name.minikube.sigs.k8s.io": "pause-375517",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c4f5f16529bf662fe426f566996e47397c249a5dfb73afc0d511a0f9b65a7854",
	            "SandboxKey": "/var/run/docker/netns/c4f5f16529bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34525"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-375517": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:1e:52:e0:38:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1a88b60002a680226ebdf7a1156be8edd29c65d3e362af8ef7f90e358d4dde1f",
	                    "EndpointID": "162541a866c8f784955cd03eeb465d10bd2d910382b8e8a33e5be053be0aa407",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-375517",
	                        "b52f36bed2a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-375517 -n pause-375517
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-375517 -n pause-375517: exit status 2 (439.762964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-375517 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-375517 logs -n 25: (1.783795601s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-310359 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-310359       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ ssh     │ -p NoKubernetes-310359 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-310359       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │                     │
	│ delete  │ -p NoKubernetes-310359                                                                                                                                                                                                    │ NoKubernetes-310359       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ start   │ -p cert-expiration-799129 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-799129    │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:33 UTC │
	│ ssh     │ force-systemd-flag-583064 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-583064 │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ delete  │ -p force-systemd-flag-583064                                                                                                                                                                                              │ force-systemd-flag-583064 │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ start   │ -p cert-options-755102 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:33 UTC │
	│ ssh     │ cert-options-755102 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ ssh     │ -p cert-options-755102 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ delete  │ -p cert-options-755102                                                                                                                                                                                                    │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ start   │ -p running-upgrade-033810 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-033810    │ jenkins │ v1.35.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ start   │ -p running-upgrade-033810 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-033810    │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:38 UTC │
	│ start   │ -p cert-expiration-799129 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                 │ cert-expiration-799129    │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:36 UTC │
	│ delete  │ -p cert-expiration-799129                                                                                                                                                                                                 │ cert-expiration-799129    │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:36 UTC │
	│ start   │ -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-530870 │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:36 UTC │
	│ stop    │ -p kubernetes-upgrade-530870                                                                                                                                                                                              │ kubernetes-upgrade-530870 │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:37 UTC │
	│ start   │ -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                           │ kubernetes-upgrade-530870 │ jenkins │ v1.37.0 │ 16 Dec 25 07:37 UTC │                     │
	│ delete  │ -p running-upgrade-033810                                                                                                                                                                                                 │ running-upgrade-033810    │ jenkins │ v1.37.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:38 UTC │
	│ start   │ -p stopped-upgrade-021632 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-021632    │ jenkins │ v1.35.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:38 UTC │
	│ stop    │ stopped-upgrade-021632 stop                                                                                                                                                                                               │ stopped-upgrade-021632    │ jenkins │ v1.35.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:38 UTC │
	│ start   │ -p stopped-upgrade-021632 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-021632    │ jenkins │ v1.37.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:43 UTC │
	│ delete  │ -p stopped-upgrade-021632                                                                                                                                                                                                 │ stopped-upgrade-021632    │ jenkins │ v1.37.0 │ 16 Dec 25 07:43 UTC │ 16 Dec 25 07:43 UTC │
	│ start   │ -p pause-375517 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-375517              │ jenkins │ v1.37.0 │ 16 Dec 25 07:43 UTC │ 16 Dec 25 07:44 UTC │
	│ start   │ -p pause-375517 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-375517              │ jenkins │ v1.37.0 │ 16 Dec 25 07:44 UTC │ 16 Dec 25 07:45 UTC │
	│ pause   │ -p pause-375517 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-375517              │ jenkins │ v1.37.0 │ 16 Dec 25 07:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 07:44:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 07:44:48.073375 1823529 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:44:48.073585 1823529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:44:48.074004 1823529 out.go:374] Setting ErrFile to fd 2...
	I1216 07:44:48.074044 1823529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:44:48.074443 1823529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:44:48.076046 1823529 out.go:368] Setting JSON to false
	I1216 07:44:48.077327 1823529 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":37639,"bootTime":1765833449,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:44:48.077439 1823529 start.go:143] virtualization:  
	I1216 07:44:48.080733 1823529 out.go:179] * [pause-375517] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:44:48.084588 1823529 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:44:48.084714 1823529 notify.go:221] Checking for updates...
	I1216 07:44:48.090394 1823529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:44:48.093279 1823529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:44:48.096062 1823529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:44:48.099044 1823529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:44:48.102030 1823529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:44:48.105678 1823529 config.go:182] Loaded profile config "pause-375517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:44:48.106334 1823529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:44:48.139693 1823529 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:44:48.139814 1823529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:44:48.209525 1823529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 07:44:48.200060043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:44:48.209625 1823529 docker.go:319] overlay module found
	I1216 07:44:48.212847 1823529 out.go:179] * Using the docker driver based on existing profile
	I1216 07:44:48.215837 1823529 start.go:309] selected driver: docker
	I1216 07:44:48.215862 1823529 start.go:927] validating driver "docker" against &{Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:44:48.216005 1823529 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:44:48.216133 1823529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:44:48.272891 1823529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 07:44:48.262435736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:44:48.273321 1823529 cni.go:84] Creating CNI manager for ""
	I1216 07:44:48.273385 1823529 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 07:44:48.273438 1823529 start.go:353] cluster config:
	{Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:44:48.276792 1823529 out.go:179] * Starting "pause-375517" primary control-plane node in "pause-375517" cluster
	I1216 07:44:48.279633 1823529 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:44:48.282657 1823529 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:44:48.285584 1823529 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:44:48.285635 1823529 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 07:44:48.285645 1823529 cache.go:65] Caching tarball of preloaded images
	I1216 07:44:48.285739 1823529 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:44:48.285751 1823529 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:44:48.285896 1823529 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/config.json ...
	I1216 07:44:48.286137 1823529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:44:48.306982 1823529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:44:48.307009 1823529 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:44:48.307025 1823529 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:44:48.307058 1823529 start.go:360] acquireMachinesLock for pause-375517: {Name:mk835939422fc9fc96e0811c1d3d47bbe9b9c1a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:44:48.307117 1823529 start.go:364] duration metric: took 36.775µs to acquireMachinesLock for "pause-375517"
	I1216 07:44:48.307142 1823529 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:44:48.307148 1823529 fix.go:54] fixHost starting: 
	I1216 07:44:48.307418 1823529 cli_runner.go:164] Run: docker container inspect pause-375517 --format={{.State.Status}}
	I1216 07:44:48.325101 1823529 fix.go:112] recreateIfNeeded on pause-375517: state=Running err=<nil>
	W1216 07:44:48.325132 1823529 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:44:48.328593 1823529 out.go:252] * Updating the running docker "pause-375517" container ...
	I1216 07:44:48.328635 1823529 machine.go:94] provisionDockerMachine start ...
	I1216 07:44:48.328735 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:48.346960 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:48.347362 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:48.347379 1823529 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:44:48.484261 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-375517
	
	I1216 07:44:48.484284 1823529 ubuntu.go:182] provisioning hostname "pause-375517"
	I1216 07:44:48.484373 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:48.503691 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:48.504007 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:48.504017 1823529 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-375517 && echo "pause-375517" | sudo tee /etc/hostname
	I1216 07:44:48.647762 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-375517
	
	I1216 07:44:48.647841 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:48.673582 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:48.673888 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:48.673904 1823529 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-375517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-375517/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-375517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:44:48.812845 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:44:48.812913 1823529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:44:48.812950 1823529 ubuntu.go:190] setting up certificates
	I1216 07:44:48.812959 1823529 provision.go:84] configureAuth start
	I1216 07:44:48.813026 1823529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-375517
	I1216 07:44:48.831826 1823529 provision.go:143] copyHostCerts
	I1216 07:44:48.831911 1823529 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:44:48.831927 1823529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:44:48.832001 1823529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:44:48.832108 1823529 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:44:48.832120 1823529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:44:48.832149 1823529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:44:48.832219 1823529 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:44:48.832230 1823529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:44:48.832257 1823529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:44:48.832361 1823529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.pause-375517 san=[127.0.0.1 192.168.85.2 localhost minikube pause-375517]
	I1216 07:44:49.085236 1823529 provision.go:177] copyRemoteCerts
	I1216 07:44:49.085302 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:44:49.085347 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:49.104018 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:49.200766 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:44:49.219066 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 07:44:49.236331 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:44:49.254404 1823529 provision.go:87] duration metric: took 441.422399ms to configureAuth
	I1216 07:44:49.254433 1823529 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:44:49.254661 1823529 config.go:182] Loaded profile config "pause-375517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:44:49.254794 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:49.271376 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:49.271683 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:49.271707 1823529 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:44:54.615946 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:44:54.615969 1823529 machine.go:97] duration metric: took 6.28732437s to provisionDockerMachine
	I1216 07:44:54.615982 1823529 start.go:293] postStartSetup for "pause-375517" (driver="docker")
	I1216 07:44:54.615993 1823529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:44:54.616058 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:44:54.616106 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.633937 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:54.740497 1823529 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:44:54.743957 1823529 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:44:54.743984 1823529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:44:54.743996 1823529 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:44:54.744051 1823529 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:44:54.744144 1823529 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:44:54.744257 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:44:54.752079 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:44:54.770147 1823529 start.go:296] duration metric: took 154.139956ms for postStartSetup
	I1216 07:44:54.770231 1823529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:44:54.770277 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.788497 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:54.881870 1823529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:44:54.886965 1823529 fix.go:56] duration metric: took 6.579809252s for fixHost
	I1216 07:44:54.886990 1823529 start.go:83] releasing machines lock for "pause-375517", held for 6.579860584s
	I1216 07:44:54.887073 1823529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-375517
	I1216 07:44:54.903786 1823529 ssh_runner.go:195] Run: cat /version.json
	I1216 07:44:54.903850 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.904141 1823529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:44:54.904211 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.924117 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:54.924001 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:55.018714 1823529 ssh_runner.go:195] Run: systemctl --version
	I1216 07:44:55.108785 1823529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:44:55.155497 1823529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:44:55.160408 1823529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:44:55.160596 1823529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:44:55.169532 1823529 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:44:55.169570 1823529 start.go:496] detecting cgroup driver to use...
	I1216 07:44:55.169603 1823529 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:44:55.169681 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:44:55.185653 1823529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:44:55.200250 1823529 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:44:55.200327 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:44:55.216454 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:44:55.230222 1823529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:44:55.367656 1823529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:44:55.531876 1823529 docker.go:234] disabling docker service ...
	I1216 07:44:55.531966 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:44:55.547106 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:44:55.559974 1823529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:44:55.694458 1823529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:44:55.822258 1823529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:44:55.836572 1823529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:44:55.852139 1823529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:44:55.852234 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.861543 1823529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:44:55.861656 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.870646 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.879365 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.888442 1823529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:44:55.897071 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.906344 1823529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.916350 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.925140 1823529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:44:55.932830 1823529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:44:55.940253 1823529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:44:56.078291 1823529 ssh_runner.go:195] Run: sudo systemctl restart crio
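The block above shows minikube pointing crictl at the CRI-O socket and then using sed to force the pause image, cgroup_manager, conmon_cgroup and default_sysctls entries in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. Below is a minimal Go sketch of the same idea, applying the two key overrides with regular expressions; the file path and values are taken from the log commands, and the sketch is an illustration, not minikube's actual implementation.

    // crio_overrides.go: a sketch of the sed-style edits shown above, rewriting
    // pause_image and cgroup_manager in a CRI-O drop-in config.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf" // same drop-in the log edits

    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "..."|' etc.
    	overrides := []struct {
    		re  *regexp.Regexp
    		out string
    	}{
    		{regexp.MustCompile(`(?m)^.*pause_image = .*$`), `pause_image = "registry.k8s.io/pause:3.10.1"`},
    		{regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`), `cgroup_manager = "cgroupfs"`},
    	}
    	for _, o := range overrides {
    		data = o.re.ReplaceAll(data, []byte(o.out))
    	}

    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// After rewriting, CRI-O must be restarted (the log runs
    	// `systemctl daemon-reload` and `systemctl restart crio`).
    }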
	I1216 07:44:56.305554 1823529 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:44:56.305629 1823529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:44:56.309446 1823529 start.go:564] Will wait 60s for crictl version
	I1216 07:44:56.309509 1823529 ssh_runner.go:195] Run: which crictl
	I1216 07:44:56.312945 1823529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:44:56.338269 1823529 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:44:56.338374 1823529 ssh_runner.go:195] Run: crio --version
	I1216 07:44:56.366585 1823529 ssh_runner.go:195] Run: crio --version
	I1216 07:44:56.397796 1823529 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:44:56.400805 1823529 cli_runner.go:164] Run: docker network inspect pause-375517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:44:56.417311 1823529 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 07:44:56.421223 1823529 kubeadm.go:884] updating cluster {Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:44:56.421376 1823529 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:44:56.421436 1823529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:44:56.454947 1823529 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:44:56.454972 1823529 crio.go:433] Images already preloaded, skipping extraction
	I1216 07:44:56.455030 1823529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:44:56.480411 1823529 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:44:56.480436 1823529 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:44:56.480451 1823529 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1216 07:44:56.480580 1823529 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-375517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:44:56.480667 1823529 ssh_runner.go:195] Run: crio config
	I1216 07:44:56.555939 1823529 cni.go:84] Creating CNI manager for ""
	I1216 07:44:56.555964 1823529 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 07:44:56.555983 1823529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:44:56.556015 1823529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-375517 NodeName:pause-375517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:44:56.556168 1823529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-375517"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
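The generated kubeadm config above sets cgroupDriver: cgroupfs in the KubeletConfiguration, matching the cgroup_manager value written into the CRI-O drop-in earlier; a mismatch between the two is a common cause of the "kubelet is not healthy" failure seen later in this log. The sketch below parses a multi-document kubeadm YAML like the one above and reports the kubelet cgroup driver so it can be compared with CRI-O's setting. The yaml.v3 dependency is an assumption for illustration; the file path is the one the log writes.

    // check_cgroup_driver.go: read a multi-document kubeadm config (like the one
    // above) and print the kubelet cgroupDriver. yaml.v3 is assumed for the sketch.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path used in the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			Kind         string `yaml:"kind"`
    			CgroupDriver string `yaml:"cgroupDriver"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		if doc.Kind == "KubeletConfiguration" {
    			fmt.Printf("kubelet cgroupDriver: %s\n", doc.CgroupDriver)
    		}
    	}
    }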
	
	I1216 07:44:56.556262 1823529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:44:56.564106 1823529 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:44:56.564209 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 07:44:56.571784 1823529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 07:44:56.584620 1823529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:44:56.597347 1823529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 07:44:56.610645 1823529 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 07:44:56.614578 1823529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:44:56.746429 1823529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:44:56.760652 1823529 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517 for IP: 192.168.85.2
	I1216 07:44:56.760676 1823529 certs.go:195] generating shared ca certs ...
	I1216 07:44:56.760693 1823529 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:44:56.760829 1823529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:44:56.760879 1823529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:44:56.760892 1823529 certs.go:257] generating profile certs ...
	I1216 07:44:56.760993 1823529 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.key
	I1216 07:44:56.761065 1823529 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/apiserver.key.257effa7
	I1216 07:44:56.761112 1823529 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/proxy-client.key
	I1216 07:44:56.761222 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:44:56.761265 1823529 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:44:56.761276 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:44:56.761307 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:44:56.761340 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:44:56.761367 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:44:56.761417 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:44:56.762034 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:44:56.779600 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:44:56.797895 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:44:56.814757 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:44:56.833312 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 07:44:56.850941 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:44:56.869113 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:44:56.892252 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 07:44:56.917906 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:44:56.941827 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:44:56.964756 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:44:56.983524 1823529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:44:56.996286 1823529 ssh_runner.go:195] Run: openssl version
	I1216 07:44:57.003826 1823529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.013550 1823529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:44:57.021712 1823529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.025634 1823529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.025732 1823529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.066994 1823529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:44:57.074443 1823529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.081727 1823529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:44:57.089174 1823529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.093100 1823529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.093168 1823529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.134590 1823529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:44:57.142139 1823529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.149615 1823529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:44:57.157482 1823529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.161122 1823529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.161191 1823529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.202523 1823529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:44:57.209955 1823529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:44:57.214016 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:44:57.255714 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:44:57.296801 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:44:57.338358 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:44:57.379481 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:44:57.420603 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
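The series of `openssl x509 -noout -in <cert> -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least the next 24 hours. The Go sketch below performs the equivalent check with crypto/x509; the certificate path is one of the paths from the log and the 24-hour window mirrors the -checkend argument.

    // checkend.go: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`,
    // i.e. fail if the certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// One of the certificates checked in the log above.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}

    	deadline := time.Now().Add(86400 * time.Second) // -checkend 86400
    	if cert.NotAfter.Before(deadline) {
    		fmt.Printf("certificate expires within 24h: NotAfter=%s\n", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate will not expire within 24h")
    }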
	I1216 07:44:57.462583 1823529 kubeadm.go:401] StartCluster: {Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:44:57.462717 1823529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:44:57.462783 1823529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:44:57.492779 1823529 cri.go:89] found id: "e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc"
	I1216 07:44:57.492808 1823529 cri.go:89] found id: "6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9"
	I1216 07:44:57.492813 1823529 cri.go:89] found id: "76751aa856240697939bca0a05e7d09ba45b9f61fb6c295aaabb8abcd8159e58"
	I1216 07:44:57.492816 1823529 cri.go:89] found id: "618d84d84bc9252e472478ee25b67f5df79c96360a711e03237ac217587aba39"
	I1216 07:44:57.492819 1823529 cri.go:89] found id: "517fe82d993ee6f5d824214ffe285d9f264597a2c18d89d85e7f4597b139afb3"
	I1216 07:44:57.492822 1823529 cri.go:89] found id: "2463d298843b3b6c96a2c0d31eeb6e502860ca63ff1ab141388cacd59870583f"
	I1216 07:44:57.492826 1823529 cri.go:89] found id: "a1cf99c5084156ca31d04538b1e6cde3c65b6503d4a9ab7f17a0b4226501fd7a"
	I1216 07:44:57.492828 1823529 cri.go:89] found id: ""
	I1216 07:44:57.492878 1823529 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 07:44:57.503902 1823529 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:44:57Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:44:57.503982 1823529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:44:57.512245 1823529 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 07:44:57.512266 1823529 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 07:44:57.512355 1823529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 07:44:57.520041 1823529 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:44:57.520767 1823529 kubeconfig.go:125] found "pause-375517" server: "https://192.168.85.2:8443"
	I1216 07:44:57.521586 1823529 kapi.go:59] client config for pause-375517: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:44:57.522078 1823529 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 07:44:57.522093 1823529 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 07:44:57.522100 1823529 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 07:44:57.522104 1823529 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 07:44:57.522111 1823529 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 07:44:57.522391 1823529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 07:44:57.530246 1823529 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 07:44:57.530327 1823529 kubeadm.go:602] duration metric: took 18.053919ms to restartPrimaryControlPlane
	I1216 07:44:57.530352 1823529 kubeadm.go:403] duration metric: took 67.777605ms to StartCluster
	I1216 07:44:57.530392 1823529 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:44:57.530465 1823529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:44:57.531273 1823529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:44:57.531489 1823529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:44:57.531808 1823529 config.go:182] Loaded profile config "pause-375517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:44:57.531856 1823529 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 07:44:57.537625 1823529 out.go:179] * Enabled addons: 
	I1216 07:44:57.537625 1823529 out.go:179] * Verifying Kubernetes components...
	I1216 07:44:57.540520 1823529 addons.go:530] duration metric: took 8.65779ms for enable addons: enabled=[]
	I1216 07:44:57.540568 1823529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:44:57.685553 1823529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:44:57.698688 1823529 node_ready.go:35] waiting up to 6m0s for node "pause-375517" to be "Ready" ...
	I1216 07:45:02.974934 1823529 node_ready.go:49] node "pause-375517" is "Ready"
	I1216 07:45:02.974962 1823529 node_ready.go:38] duration metric: took 5.276231055s for node "pause-375517" to be "Ready" ...
	I1216 07:45:02.974976 1823529 api_server.go:52] waiting for apiserver process to appear ...
	I1216 07:45:02.975038 1823529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:45:02.997105 1823529 api_server.go:72] duration metric: took 5.465578703s to wait for apiserver process to appear ...
	I1216 07:45:02.997129 1823529 api_server.go:88] waiting for apiserver healthz status ...
	I1216 07:45:02.997148 1823529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 07:45:03.210047 1823529 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:45:03.210075 1823529 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:45:03.497463 1823529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 07:45:03.506009 1823529 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:45:03.506160 1823529 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:45:03.997828 1823529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 07:45:04.007343 1823529 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 07:45:04.008901 1823529 api_server.go:141] control plane version: v1.34.2
	I1216 07:45:04.008939 1823529 api_server.go:131] duration metric: took 1.01180219s to wait for apiserver health ...
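The healthz polling above first returns 500 with a few post-start hooks still pending and flips to 200 about a second later; minikube simply retries the endpoint until it reports healthy. Below is a minimal sketch of such a retry loop against https://192.168.85.2:8443/healthz. It skips TLS verification for brevity, which the real code does not do (the log shows the cluster CA and client certs being loaded), so treat it purely as an illustration.

    // healthz_wait.go: poll the apiserver /healthz endpoint until it returns 200
    // or a deadline passes, roughly what the api_server.go wait above is doing.
    // TLS verification is skipped here for brevity; minikube uses the cluster CA.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}

    	const url = "https://192.168.85.2:8443/healthz" // endpoint from the log above
    	deadline := time.Now().Add(2 * time.Minute)

    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthz: ok")
    				return
    			}
    			fmt.Printf("apiserver healthz: %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver did not become healthy before the deadline")
    }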
	I1216 07:45:04.008952 1823529 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 07:45:04.012716 1823529 system_pods.go:59] 7 kube-system pods found
	I1216 07:45:04.012758 1823529 system_pods.go:61] "coredns-66bc5c9577-92vwf" [e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:45:04.012769 1823529 system_pods.go:61] "etcd-pause-375517" [00690516-0fd6-4a7d-9abb-d21dd49bd0a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 07:45:04.012775 1823529 system_pods.go:61] "kindnet-cmscz" [c976794b-50e2-4508-a421-d94c7c247cd7] Running
	I1216 07:45:04.012783 1823529 system_pods.go:61] "kube-apiserver-pause-375517" [5a339fa1-1300-4124-a183-16077c517dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:45:04.012790 1823529 system_pods.go:61] "kube-controller-manager-pause-375517" [c07521e2-9ac8-4116-a985-f7a1664e4006] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:45:04.012795 1823529 system_pods.go:61] "kube-proxy-t4gtq" [b5cbe5d5-2616-47da-9e2c-3320915cc6a2] Running
	I1216 07:45:04.012808 1823529 system_pods.go:61] "kube-scheduler-pause-375517" [e9c2ba56-3b59-4bd1-92a9-6ce46072165d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:45:04.012813 1823529 system_pods.go:74] duration metric: took 3.856271ms to wait for pod list to return data ...
	I1216 07:45:04.012826 1823529 default_sa.go:34] waiting for default service account to be created ...
	I1216 07:45:04.016010 1823529 default_sa.go:45] found service account: "default"
	I1216 07:45:04.016041 1823529 default_sa.go:55] duration metric: took 3.20732ms for default service account to be created ...
	I1216 07:45:04.016052 1823529 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 07:45:04.019631 1823529 system_pods.go:86] 7 kube-system pods found
	I1216 07:45:04.019670 1823529 system_pods.go:89] "coredns-66bc5c9577-92vwf" [e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:45:04.019680 1823529 system_pods.go:89] "etcd-pause-375517" [00690516-0fd6-4a7d-9abb-d21dd49bd0a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 07:45:04.019687 1823529 system_pods.go:89] "kindnet-cmscz" [c976794b-50e2-4508-a421-d94c7c247cd7] Running
	I1216 07:45:04.019695 1823529 system_pods.go:89] "kube-apiserver-pause-375517" [5a339fa1-1300-4124-a183-16077c517dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:45:04.019703 1823529 system_pods.go:89] "kube-controller-manager-pause-375517" [c07521e2-9ac8-4116-a985-f7a1664e4006] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:45:04.019710 1823529 system_pods.go:89] "kube-proxy-t4gtq" [b5cbe5d5-2616-47da-9e2c-3320915cc6a2] Running
	I1216 07:45:04.019730 1823529 system_pods.go:89] "kube-scheduler-pause-375517" [e9c2ba56-3b59-4bd1-92a9-6ce46072165d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:45:04.019738 1823529 system_pods.go:126] duration metric: took 3.680958ms to wait for k8s-apps to be running ...
	I1216 07:45:04.019750 1823529 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:45:04.019817 1823529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:04.033755 1823529 system_svc.go:56] duration metric: took 13.995496ms WaitForService to wait for kubelet
	I1216 07:45:04.033829 1823529 kubeadm.go:587] duration metric: took 6.502307187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:45:04.033876 1823529 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:45:04.037691 1823529 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:45:04.037725 1823529 node_conditions.go:123] node cpu capacity is 2
	I1216 07:45:04.037738 1823529 node_conditions.go:105] duration metric: took 3.833329ms to run NodePressure ...
	I1216 07:45:04.037771 1823529 start.go:242] waiting for startup goroutines ...
	I1216 07:45:04.037786 1823529 start.go:247] waiting for cluster config update ...
	I1216 07:45:04.037798 1823529 start.go:256] writing updated cluster config ...
	I1216 07:45:04.038143 1823529 ssh_runner.go:195] Run: rm -f paused
	I1216 07:45:04.041934 1823529 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:45:04.042557 1823529 kapi.go:59] client config for pause-375517: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:45:04.045966 1823529 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-92vwf" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:45:06.051889 1823529 pod_ready.go:104] pod "coredns-66bc5c9577-92vwf" is not "Ready", error: <nil>
	W1216 07:45:08.052575 1823529 pod_ready.go:104] pod "coredns-66bc5c9577-92vwf" is not "Ready", error: <nil>
	I1216 07:45:10.551685 1823529 pod_ready.go:94] pod "coredns-66bc5c9577-92vwf" is "Ready"
	I1216 07:45:10.551719 1823529 pod_ready.go:86] duration metric: took 6.50572704s for pod "coredns-66bc5c9577-92vwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:10.554435 1823529 pod_ready.go:83] waiting for pod "etcd-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:45:12.560436 1823529 pod_ready.go:104] pod "etcd-pause-375517" is not "Ready", error: <nil>
	W1216 07:45:14.560910 1823529 pod_ready.go:104] pod "etcd-pause-375517" is not "Ready", error: <nil>
	I1216 07:45:16.060641 1823529 pod_ready.go:94] pod "etcd-pause-375517" is "Ready"
	I1216 07:45:16.060667 1823529 pod_ready.go:86] duration metric: took 5.50620337s for pod "etcd-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.068855 1823529 pod_ready.go:83] waiting for pod "kube-apiserver-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.075257 1823529 pod_ready.go:94] pod "kube-apiserver-pause-375517" is "Ready"
	I1216 07:45:16.075297 1823529 pod_ready.go:86] duration metric: took 6.410997ms for pod "kube-apiserver-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.077840 1823529 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.082286 1823529 pod_ready.go:94] pod "kube-controller-manager-pause-375517" is "Ready"
	I1216 07:45:16.082311 1823529 pod_ready.go:86] duration metric: took 4.444371ms for pod "kube-controller-manager-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.084831 1823529 pod_ready.go:83] waiting for pod "kube-proxy-t4gtq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.257416 1823529 pod_ready.go:94] pod "kube-proxy-t4gtq" is "Ready"
	I1216 07:45:16.257449 1823529 pod_ready.go:86] duration metric: took 172.590813ms for pod "kube-proxy-t4gtq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.457377 1823529 pod_ready.go:83] waiting for pod "kube-scheduler-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.857449 1823529 pod_ready.go:94] pod "kube-scheduler-pause-375517" is "Ready"
	I1216 07:45:16.857480 1823529 pod_ready.go:86] duration metric: took 400.015387ms for pod "kube-scheduler-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.857494 1823529 pod_ready.go:40] duration metric: took 12.815527566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:45:16.916419 1823529 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 07:45:16.919675 1823529 out.go:179] * Done! kubectl is now configured to use "pause-375517" cluster and "default" namespace by default
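The final wait above loops over the kube-system control-plane pods (selected by their k8s-app / component labels) until each reports the Ready condition. A rough client-go equivalent is sketched below; the kubeconfig path and the single component=etcd selector are illustrative placeholders, and error handling is minimal.

    // pod_ready_wait.go: a rough client-go sketch of waiting for labelled
    // kube-system pods to report the Ready condition, as in the log above.
    // The kubeconfig path and label selector are illustrative placeholders.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()

    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "component=etcd"})
    		if err != nil {
    			log.Fatal(err)
    		}
    		allReady := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if !podReady(p) {
    				allReady = false
    			}
    		}
    		if allReady {
    			fmt.Println("all selected pods are Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			log.Fatal("timed out waiting for pods to become Ready")
    		case <-time.After(2 * time.Second):
    		}
    	}
    }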
	I1216 07:45:18.320076 1798136 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001193351s
	I1216 07:45:18.320115 1798136 kubeadm.go:319] 
	I1216 07:45:18.320179 1798136 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 07:45:18.320232 1798136 kubeadm.go:319] 	- The kubelet is not running
	I1216 07:45:18.320356 1798136 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 07:45:18.320369 1798136 kubeadm.go:319] 
	I1216 07:45:18.320516 1798136 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 07:45:18.320565 1798136 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 07:45:18.320600 1798136 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 07:45:18.320608 1798136 kubeadm.go:319] 
	I1216 07:45:18.325958 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 07:45:18.326491 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 07:45:18.326621 1798136 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 07:45:18.326926 1798136 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 07:45:18.326936 1798136 kubeadm.go:319] 
	I1216 07:45:18.327017 1798136 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 07:45:18.327166 1798136 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001193351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
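
A minimal sketch (illustrative only, not minikube's or kubeadm's actual code) of the health probe the messages above describe: polling GET http://127.0.0.1:10248/healthz until it answers 200 OK or the 4m0s deadline passes. The endpoint and deadline come from the log above; the 2-second poll interval is an assumption for illustration.

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // waitForKubelet polls the kubelet healthz endpoint until it returns
    // 200 OK or the context deadline expires.
    func waitForKubelet(ctx context.Context, url string) error {
        ticker := time.NewTicker(2 * time.Second)
        defer ticker.Stop()
        for {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kubelet not healthy before deadline: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        if err := waitForKubelet(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
            fmt.Println(err)
        }
    }

In this run the probe never succeeds, so kubeadm aborts the wait-control-plane phase and minikube falls through to the reset-and-retry path below.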
	
	I1216 07:45:18.327284 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 07:45:18.754392 1798136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:18.772231 1798136 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 07:45:18.772294 1798136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 07:45:18.783699 1798136 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 07:45:18.783726 1798136 kubeadm.go:158] found existing configuration files:
	
	I1216 07:45:18.783783 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 07:45:18.793812 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 07:45:18.793878 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 07:45:18.802961 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 07:45:18.812328 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 07:45:18.812395 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 07:45:18.821315 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 07:45:18.832420 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 07:45:18.832563 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 07:45:18.842333 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 07:45:18.852116 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 07:45:18.852185 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
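
A rough Go equivalent (a sketch under stated assumptions, not minikube's real implementation) of the grep/rm sequence above: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and removed otherwise so the next kubeadm init can regenerate it. The file list and endpoint are taken from the log; error handling is simplified.

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // cleanStaleKubeconfigs mirrors the grep-then-rm pattern shown above.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && bytes.Contains(data, []byte(endpoint)) {
                continue // endpoint already present; keep this file
            }
            // File missing or pointing elsewhere: remove it (rm -f semantics).
            if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
                fmt.Println("remove failed:", rmErr)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }

Here none of the four files exist (the earlier kubeadm reset removed them), so the greps exit with status 2 and the rm calls are effectively no-ops before the retry below.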
	I1216 07:45:18.860819 1798136 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 07:45:18.915421 1798136 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 07:45:18.915483 1798136 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 07:45:19.051011 1798136 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 07:45:19.051083 1798136 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 07:45:19.051119 1798136 kubeadm.go:319] OS: Linux
	I1216 07:45:19.051164 1798136 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 07:45:19.051212 1798136 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 07:45:19.051259 1798136 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 07:45:19.051307 1798136 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 07:45:19.051355 1798136 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 07:45:19.051403 1798136 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 07:45:19.051448 1798136 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 07:45:19.051496 1798136 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 07:45:19.051542 1798136 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 07:45:19.142528 1798136 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 07:45:19.143196 1798136 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 07:45:19.143363 1798136 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 07:45:19.156893 1798136 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.260071225Z" level=info msg="Started container" PID=2374 containerID=e10fa0b3d14a2100bd2a09bb4836e34a4acbec6c8e118a3af3a2450450d66d20 description=kube-system/kube-scheduler-pause-375517/kube-scheduler id=1a7f5956-fb90-4a1c-8575-0a512ef296c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dde12892f826f842f45ba837580b02656d2c8b50a2ac260c648c8f9af74410f
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.265694144Z" level=info msg="Created container 546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2: kube-system/etcd-pause-375517/etcd" id=f2eae53a-11f0-455a-8c99-a95b1d4e7615 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.267968334Z" level=info msg="Starting container: 546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2" id=3274bde2-420c-4d4f-ab64-47d191b5f737 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.276077245Z" level=info msg="Started container" PID=2381 containerID=546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2 description=kube-system/etcd-pause-375517/etcd id=3274bde2-420c-4d4f-ab64-47d191b5f737 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f6124b2ebb5213ec462756693ee7757731a3f434da27f430335852b0c001eac
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.298970074Z" level=info msg="Created container 54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255: kube-system/kindnet-cmscz/kindnet-cni" id=3af3a3a8-7c93-4e6d-b58e-12f8a6013edc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.299864919Z" level=info msg="Starting container: 54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255" id=621910e7-d15a-4e5f-b221-9c67b28a62c6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.306394046Z" level=info msg="Started container" PID=2397 containerID=54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255 description=kube-system/kindnet-cmscz/kindnet-cni id=621910e7-d15a-4e5f-b221-9c67b28a62c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=346db1de4bb636ea82d10111e582b881ca1a7df62780833b755a35939bd6f09f
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.378076939Z" level=info msg="Created container 4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1: kube-system/kube-proxy-t4gtq/kube-proxy" id=2f0cef4b-bf2a-44cf-952b-18de59bc303e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.378991911Z" level=info msg="Starting container: 4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1" id=a920dafa-9a00-43ea-ab40-df194f9c1fb3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.381799932Z" level=info msg="Started container" PID=2389 containerID=4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1 description=kube-system/kube-proxy-t4gtq/kube-proxy id=a920dafa-9a00-43ea-ab40-df194f9c1fb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=56505329b3ab77977b95542cfc28bc98dd8247b6a161b0f6d78ebaef52aedf7a
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.636724129Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.640378124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.640586832Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.640627522Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.643908492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.643939705Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.64396158Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.647055274Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.647089679Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.647112752Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.650156952Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.650192472Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.650217974Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.653408925Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.653445102Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	54832596223e3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   346db1de4bb63       kindnet-cmscz                          kube-system
	4f3eca46adabd       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   56505329b3ab7       kube-proxy-t4gtq                       kube-system
	6f71b54f1ba1c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   a3b997adb651d       coredns-66bc5c9577-92vwf               kube-system
	546b094189b40       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   22 seconds ago       Running             etcd                      1                   3f6124b2ebb52       etcd-pause-375517                      kube-system
	e10fa0b3d14a2       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   22 seconds ago       Running             kube-scheduler            1                   0dde12892f826       kube-scheduler-pause-375517            kube-system
	8a86aee515e65       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   22 seconds ago       Running             kube-apiserver            1                   74979dd3fad13       kube-apiserver-pause-375517            kube-system
	ca7b3191cff4e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   22 seconds ago       Running             kube-controller-manager   1                   eb6970570e113       kube-controller-manager-pause-375517   kube-system
	e6e6428132595       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   a3b997adb651d       coredns-66bc5c9577-92vwf               kube-system
	6f2801c86394d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   56505329b3ab7       kube-proxy-t4gtq                       kube-system
	76751aa856240       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   346db1de4bb63       kindnet-cmscz                          kube-system
	618d84d84bc92       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   74979dd3fad13       kube-apiserver-pause-375517            kube-system
	517fe82d993ee       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   eb6970570e113       kube-controller-manager-pause-375517   kube-system
	2463d298843b3       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   0dde12892f826       kube-scheduler-pause-375517            kube-system
	a1cf99c508415       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   3f6124b2ebb52       etcd-pause-375517                      kube-system
	
	
	==> coredns [6f71b54f1ba1c65db4deac9b06dfb0d64fce05e51d14ce123aa0eb55cc857a8a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51590 - 50993 "HINFO IN 7267632414797955180.1726797191412746475. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012659252s
	
	
	==> coredns [e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46134 - 26844 "HINFO IN 1827242947691164020.8591283332183659217. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013998745s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-375517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-375517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=pause-375517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T07_43_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 07:43:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-375517
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:45:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:43:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:43:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:43:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:44:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-375517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                a4719513-6662-412e-8570-46ee1b2cab2a
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-92vwf                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-375517                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-cmscz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-375517             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-375517    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-t4gtq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-375517             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Warning  CgroupV1                 90s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-375517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-375517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     90s (x8 over 90s)  kubelet          Node pause-375517 status is now: NodeHasSufficientPID
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-375517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-375517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-375517 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s                node-controller  Node pause-375517 event: Registered Node pause-375517 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-375517 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-375517 event: Registered Node pause-375517 in Controller
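
For reference, the percentages in the Allocated resources table above are the request/limit totals relative to the node's allocatable capacity: 850m CPU requested out of 2000m allocatable is ~42%, the 100m CPU limit is 5%, and the 220Mi (225280Ki) memory request out of 8022296Ki allocatable truncates to 2%.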
	
	
	==> dmesg <==
	[  +2.815105] overlayfs: idmapped layers are currently not supported
	[Dec16 07:06] overlayfs: idmapped layers are currently not supported
	[Dec16 07:10] overlayfs: idmapped layers are currently not supported
	[Dec16 07:12] overlayfs: idmapped layers are currently not supported
	[Dec16 07:17] overlayfs: idmapped layers are currently not supported
	[ +33.022046] overlayfs: idmapped layers are currently not supported
	[Dec16 07:18] overlayfs: idmapped layers are currently not supported
	[Dec16 07:19] overlayfs: idmapped layers are currently not supported
	[Dec16 07:20] overlayfs: idmapped layers are currently not supported
	[Dec16 07:22] overlayfs: idmapped layers are currently not supported
	[Dec16 07:23] overlayfs: idmapped layers are currently not supported
	[  +6.617945] overlayfs: idmapped layers are currently not supported
	[ +47.625208] overlayfs: idmapped layers are currently not supported
	[Dec16 07:24] overlayfs: idmapped layers are currently not supported
	[Dec16 07:25] overlayfs: idmapped layers are currently not supported
	[ +25.916657] overlayfs: idmapped layers are currently not supported
	[Dec16 07:26] overlayfs: idmapped layers are currently not supported
	[Dec16 07:27] overlayfs: idmapped layers are currently not supported
	[Dec16 07:29] overlayfs: idmapped layers are currently not supported
	[Dec16 07:31] overlayfs: idmapped layers are currently not supported
	[Dec16 07:32] overlayfs: idmapped layers are currently not supported
	[ +24.023346] overlayfs: idmapped layers are currently not supported
	[Dec16 07:33] overlayfs: idmapped layers are currently not supported
	[Dec16 07:36] overlayfs: idmapped layers are currently not supported
	[Dec16 07:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2] <==
	{"level":"warn","ts":"2025-12-16T07:45:00.833847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.866461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.908971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.954261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.980177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.024715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.060090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.083289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.156238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.171370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.206028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.242784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.271211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.290420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.325297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.375469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.389425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.426620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.497424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.515920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.598096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.625231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.670275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.742816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.975166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39058","server-name":"","error":"EOF"}
	
	
	==> etcd [a1cf99c5084156ca31d04538b1e6cde3c65b6503d4a9ab7f17a0b4226501fd7a] <==
	{"level":"warn","ts":"2025-12-16T07:43:54.624094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.645603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.665176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.695579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.722458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.765283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.857844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58788","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T07:44:49.429830Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-16T07:44:49.429891Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-375517","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-16T07:44:49.429980Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T07:44:49.710375Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T07:44:49.710490Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.710513Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-16T07:44:49.710547Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710632Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710671Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T07:44:49.710680Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.710693Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710770Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710822Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T07:44:49.710863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.713641Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-16T07:44:49.713728Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.713759Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T07:44:49.713767Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-375517","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 07:45:20 up 10:27,  0 user,  load average: 1.66, 1.72, 1.79
	Linux pause-375517 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255] <==
	I1216 07:44:58.434727       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 07:44:58.434956       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 07:44:58.435085       1 main.go:148] setting mtu 1500 for CNI 
	I1216 07:44:58.435097       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 07:44:58.435110       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T07:44:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 07:44:58.636732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 07:44:58.636757       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 07:44:58.636766       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 07:44:58.637466       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 07:45:03.240711       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 07:45:03.240821       1 metrics.go:72] Registering metrics
	I1216 07:45:03.240900       1 controller.go:711] "Syncing nftables rules"
	I1216 07:45:08.636265       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 07:45:08.636345       1 main.go:301] handling current node
	I1216 07:45:18.637458       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 07:45:18.637613       1 main.go:301] handling current node
	
	
	==> kindnet [76751aa856240697939bca0a05e7d09ba45b9f61fb6c295aaabb8abcd8159e58] <==
	I1216 07:44:04.529769       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 07:44:04.530032       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 07:44:04.530156       1 main.go:148] setting mtu 1500 for CNI 
	I1216 07:44:04.530169       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 07:44:04.530187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T07:44:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 07:44:04.727577       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 07:44:04.727690       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 07:44:04.727700       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 07:44:04.733296       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 07:44:34.727907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 07:44:34.728050       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 07:44:34.729102       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1216 07:44:34.733429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1216 07:44:36.028339       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 07:44:36.028390       1 metrics.go:72] Registering metrics
	I1216 07:44:36.028489       1 controller.go:711] "Syncing nftables rules"
	I1216 07:44:44.732362       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 07:44:44.732421       1 main.go:301] handling current node
	
	
	==> kube-apiserver [618d84d84bc9252e472478ee25b67f5df79c96360a711e03237ac217587aba39] <==
	W1216 07:44:49.453479       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453522       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453546       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453570       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453595       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453620       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453647       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453673       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453731       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453756       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453781       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453808       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453833       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453856       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453880       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455439       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455470       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455498       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455524       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455562       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455591       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455616       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455641       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455668       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.452750       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8a86aee515e658b93fc7d10adcd6891a6e4eba14453f9747b4a80a7a361f9266] <==
	I1216 07:45:03.136791       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 07:45:03.137744       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 07:45:03.156684       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 07:45:03.164567       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 07:45:03.164617       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 07:45:03.164675       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 07:45:03.175503       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 07:45:03.175542       1 policy_source.go:240] refreshing policies
	I1216 07:45:03.181756       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 07:45:03.182858       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 07:45:03.183062       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 07:45:03.183378       1 aggregator.go:171] initial CRD sync complete...
	I1216 07:45:03.183399       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 07:45:03.183405       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 07:45:03.183412       1 cache.go:39] Caches are synced for autoregister controller
	I1216 07:45:03.194344       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 07:45:03.199169       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 07:45:03.213490       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1216 07:45:03.221733       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 07:45:03.714007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 07:45:04.936713       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 07:45:06.338563       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 07:45:06.576134       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 07:45:06.628012       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 07:45:06.731420       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [517fe82d993ee6f5d824214ffe285d9f264597a2c18d89d85e7f4597b139afb3] <==
	I1216 07:44:02.762695       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 07:44:02.763132       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 07:44:02.763470       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 07:44:02.765494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 07:44:02.765582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 07:44:02.767120       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 07:44:02.768397       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 07:44:02.769642       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 07:44:02.769702       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 07:44:02.769729       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 07:44:02.769735       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 07:44:02.769740       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 07:44:02.771850       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 07:44:02.778045       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 07:44:02.778137       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 07:44:02.778323       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 07:44:02.778459       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 07:44:02.779956       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-375517" podCIDRs=["10.244.0.0/24"]
	I1216 07:44:02.787358       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:44:02.792795       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 07:44:02.801666       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 07:44:02.810696       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:44:02.810720       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 07:44:02.810729       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 07:44:47.722605       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ca7b3191cff4e83a60f9aac1be7a6876b73f4e2f8e10d231f2aa6d86edad73c5] <==
	I1216 07:45:06.370792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:45:06.371353       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 07:45:06.371393       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 07:45:06.371094       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 07:45:06.371113       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 07:45:06.371123       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 07:45:06.372291       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 07:45:06.373117       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 07:45:06.373206       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 07:45:06.373253       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 07:45:06.373285       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 07:45:06.371165       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 07:45:06.371173       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 07:45:06.374267       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:45:06.380582       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 07:45:06.380749       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 07:45:06.380796       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 07:45:06.380782       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 07:45:06.383550       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 07:45:06.386674       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 07:45:06.388970       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1216 07:45:06.391412       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 07:45:06.394962       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 07:45:06.398111       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 07:45:06.400663       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1] <==
	I1216 07:45:01.974003       1 server_linux.go:53] "Using iptables proxy"
	I1216 07:45:03.232697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 07:45:03.335188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 07:45:03.335250       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 07:45:03.335374       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 07:45:03.372772       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 07:45:03.372908       1 server_linux.go:132] "Using iptables Proxier"
	I1216 07:45:03.379002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 07:45:03.379383       1 server.go:527] "Version info" version="v1.34.2"
	I1216 07:45:03.379612       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:45:03.385035       1 config.go:200] "Starting service config controller"
	I1216 07:45:03.385121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 07:45:03.385178       1 config.go:106] "Starting endpoint slice config controller"
	I1216 07:45:03.385207       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 07:45:03.385270       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 07:45:03.385298       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 07:45:03.394394       1 config.go:309] "Starting node config controller"
	I1216 07:45:03.394460       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 07:45:03.394470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 07:45:03.485905       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 07:45:03.485899       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 07:45:03.485946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9] <==
	I1216 07:44:04.510965       1 server_linux.go:53] "Using iptables proxy"
	I1216 07:44:04.600434       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 07:44:04.700857       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 07:44:04.700897       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 07:44:04.700970       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 07:44:04.824232       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 07:44:04.824365       1 server_linux.go:132] "Using iptables Proxier"
	I1216 07:44:04.828676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 07:44:04.829148       1 server.go:527] "Version info" version="v1.34.2"
	I1216 07:44:04.829211       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:44:04.830885       1 config.go:200] "Starting service config controller"
	I1216 07:44:04.830913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 07:44:04.830930       1 config.go:106] "Starting endpoint slice config controller"
	I1216 07:44:04.830934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 07:44:04.830945       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 07:44:04.830949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 07:44:04.831619       1 config.go:309] "Starting node config controller"
	I1216 07:44:04.831640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 07:44:04.831647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 07:44:04.931382       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 07:44:04.931513       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 07:44:04.931563       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2463d298843b3b6c96a2c0d31eeb6e502860ca63ff1ab141388cacd59870583f] <==
	E1216 07:43:56.887123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:43:56.887296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 07:43:56.887397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 07:43:56.887502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 07:43:56.887674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 07:43:56.887886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 07:43:56.888018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 07:43:56.888137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 07:43:56.888455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 07:43:56.888673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 07:43:56.888790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 07:43:56.888883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 07:43:56.889547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 07:43:56.890519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 07:43:56.890649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 07:43:56.890705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 07:43:56.890748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 07:43:56.890948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1216 07:43:57.770537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:44:49.429317       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1216 07:44:49.429347       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1216 07:44:49.429366       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 07:44:49.429395       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:44:49.429580       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1216 07:44:49.429595       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e10fa0b3d14a2100bd2a09bb4836e34a4acbec6c8e118a3af3a2450450d66d20] <==
	I1216 07:45:02.130810       1 serving.go:386] Generated self-signed cert in-memory
	I1216 07:45:03.264190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 07:45:03.264439       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:45:03.274219       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 07:45:03.274433       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 07:45:03.274482       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 07:45:03.274537       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 07:45:03.288751       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:45:03.288846       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:45:03.289147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 07:45:03.289193       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 07:45:03.374798       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 07:45:03.389801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:45:03.389733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 16 07:44:58 pause-375517 kubelet[1308]: I1216 07:44:58.088000    1308 scope.go:117] "RemoveContainer" containerID="6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.088521    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="680da77e9ca9056b3396aa7cb72665b7" pod="kube-system/etcd-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.088740    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64b36c3ec486377192e1d929a98370b4" pod="kube-system/kube-scheduler-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.088911    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t4gtq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b5cbe5d5-2616-47da-9e2c-3320915cc6a2" pod="kube-system/kube-proxy-t4gtq"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.089085    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cmscz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c976794b-50e2-4508-a421-d94c7c247cd7" pod="kube-system/kindnet-cmscz"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.089314    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15cf344781b2c6baaaa5ac04b97ac867" pod="kube-system/kube-controller-manager-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.089516    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be626bea5116bec1aa3ead058959622e" pod="kube-system/kube-apiserver-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: I1216 07:44:58.103229    1308 scope.go:117] "RemoveContainer" containerID="e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.103772    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-92vwf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9" pod="kube-system/coredns-66bc5c9577-92vwf"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.103961    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15cf344781b2c6baaaa5ac04b97ac867" pod="kube-system/kube-controller-manager-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104130    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be626bea5116bec1aa3ead058959622e" pod="kube-system/kube-apiserver-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104299    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="680da77e9ca9056b3396aa7cb72665b7" pod="kube-system/etcd-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104525    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64b36c3ec486377192e1d929a98370b4" pod="kube-system/kube-scheduler-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104732    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t4gtq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b5cbe5d5-2616-47da-9e2c-3320915cc6a2" pod="kube-system/kube-proxy-t4gtq"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104962    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cmscz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c976794b-50e2-4508-a421-d94c7c247cd7" pod="kube-system/kindnet-cmscz"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.761515    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-t4gtq\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="b5cbe5d5-2616-47da-9e2c-3320915cc6a2" pod="kube-system/kube-proxy-t4gtq"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.761896    1308 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-375517\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.762265    1308 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-375517\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.875359    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-cmscz\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="c976794b-50e2-4508-a421-d94c7c247cd7" pod="kube-system/kindnet-cmscz"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.932712    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-92vwf\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9" pod="kube-system/coredns-66bc5c9577-92vwf"
	Dec 16 07:45:03 pause-375517 kubelet[1308]: E1216 07:45:03.058028    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-375517\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="15cf344781b2c6baaaa5ac04b97ac867" pod="kube-system/kube-controller-manager-pause-375517"
	Dec 16 07:45:09 pause-375517 kubelet[1308]: W1216 07:45:09.122602    1308 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 16 07:45:17 pause-375517 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 07:45:17 pause-375517 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 07:45:17 pause-375517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-375517 -n pause-375517
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-375517 -n pause-375517: exit status 2 (371.820157ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-375517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-375517
helpers_test.go:244: (dbg) docker inspect pause-375517:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da",
	        "Created": "2025-12-16T07:43:34.345225442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1820978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T07:43:34.42540524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/hostname",
	        "HostsPath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/hosts",
	        "LogPath": "/var/lib/docker/containers/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da/b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da-json.log",
	        "Name": "/pause-375517",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-375517:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-375517",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b52f36bed2a5ba3f7851cb2bbf0891bd89ccb7e8e3da43b736b13c4dd35073da",
	                "LowerDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495-init/diff:/var/lib/docker/overlay2/bf9e5e3f04a34ae52d17b5e81aeacb3854428b2bda7b4fcb7e1d86558db759ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba2213d24f55e4130747919301670fa03d0628fdfe4d6f37aec9925637f1f495/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-375517",
	                "Source": "/var/lib/docker/volumes/pause-375517/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-375517",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-375517",
	                "name.minikube.sigs.k8s.io": "pause-375517",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c4f5f16529bf662fe426f566996e47397c249a5dfb73afc0d511a0f9b65a7854",
	            "SandboxKey": "/var/run/docker/netns/c4f5f16529bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34525"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-375517": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:1e:52:e0:38:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1a88b60002a680226ebdf7a1156be8edd29c65d3e362af8ef7f90e358d4dde1f",
	                    "EndpointID": "162541a866c8f784955cd03eeb465d10bd2d910382b8e8a33e5be053be0aa407",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-375517",
	                        "b52f36bed2a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-375517 -n pause-375517
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-375517 -n pause-375517: exit status 2 (363.979279ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-375517 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-375517 logs -n 25: (1.404553344s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-310359 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-310359       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ ssh     │ -p NoKubernetes-310359 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-310359       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │                     │
	│ delete  │ -p NoKubernetes-310359                                                                                                                                                                                                    │ NoKubernetes-310359       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ start   │ -p cert-expiration-799129 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-799129    │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:33 UTC │
	│ ssh     │ force-systemd-flag-583064 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-583064 │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ delete  │ -p force-systemd-flag-583064                                                                                                                                                                                              │ force-systemd-flag-583064 │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:32 UTC │
	│ start   │ -p cert-options-755102 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:32 UTC │ 16 Dec 25 07:33 UTC │
	│ ssh     │ cert-options-755102 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ ssh     │ -p cert-options-755102 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ delete  │ -p cert-options-755102                                                                                                                                                                                                    │ cert-options-755102       │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ start   │ -p running-upgrade-033810 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-033810    │ jenkins │ v1.35.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:33 UTC │
	│ start   │ -p running-upgrade-033810 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-033810    │ jenkins │ v1.37.0 │ 16 Dec 25 07:33 UTC │ 16 Dec 25 07:38 UTC │
	│ start   │ -p cert-expiration-799129 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                 │ cert-expiration-799129    │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:36 UTC │
	│ delete  │ -p cert-expiration-799129                                                                                                                                                                                                 │ cert-expiration-799129    │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:36 UTC │
	│ start   │ -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-530870 │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:36 UTC │
	│ stop    │ -p kubernetes-upgrade-530870                                                                                                                                                                                              │ kubernetes-upgrade-530870 │ jenkins │ v1.37.0 │ 16 Dec 25 07:36 UTC │ 16 Dec 25 07:37 UTC │
	│ start   │ -p kubernetes-upgrade-530870 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                           │ kubernetes-upgrade-530870 │ jenkins │ v1.37.0 │ 16 Dec 25 07:37 UTC │                     │
	│ delete  │ -p running-upgrade-033810                                                                                                                                                                                                 │ running-upgrade-033810    │ jenkins │ v1.37.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:38 UTC │
	│ start   │ -p stopped-upgrade-021632 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-021632    │ jenkins │ v1.35.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:38 UTC │
	│ stop    │ stopped-upgrade-021632 stop                                                                                                                                                                                               │ stopped-upgrade-021632    │ jenkins │ v1.35.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:38 UTC │
	│ start   │ -p stopped-upgrade-021632 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-021632    │ jenkins │ v1.37.0 │ 16 Dec 25 07:38 UTC │ 16 Dec 25 07:43 UTC │
	│ delete  │ -p stopped-upgrade-021632                                                                                                                                                                                                 │ stopped-upgrade-021632    │ jenkins │ v1.37.0 │ 16 Dec 25 07:43 UTC │ 16 Dec 25 07:43 UTC │
	│ start   │ -p pause-375517 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-375517              │ jenkins │ v1.37.0 │ 16 Dec 25 07:43 UTC │ 16 Dec 25 07:44 UTC │
	│ start   │ -p pause-375517 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-375517              │ jenkins │ v1.37.0 │ 16 Dec 25 07:44 UTC │ 16 Dec 25 07:45 UTC │
	│ pause   │ -p pause-375517 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-375517              │ jenkins │ v1.37.0 │ 16 Dec 25 07:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 07:44:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 07:44:48.073375 1823529 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:44:48.073585 1823529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:44:48.074004 1823529 out.go:374] Setting ErrFile to fd 2...
	I1216 07:44:48.074044 1823529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:44:48.074443 1823529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:44:48.076046 1823529 out.go:368] Setting JSON to false
	I1216 07:44:48.077327 1823529 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":37639,"bootTime":1765833449,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 07:44:48.077439 1823529 start.go:143] virtualization:  
	I1216 07:44:48.080733 1823529 out.go:179] * [pause-375517] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 07:44:48.084588 1823529 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 07:44:48.084714 1823529 notify.go:221] Checking for updates...
	I1216 07:44:48.090394 1823529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 07:44:48.093279 1823529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:44:48.096062 1823529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 07:44:48.099044 1823529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 07:44:48.102030 1823529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 07:44:48.105678 1823529 config.go:182] Loaded profile config "pause-375517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:44:48.106334 1823529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 07:44:48.139693 1823529 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 07:44:48.139814 1823529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:44:48.209525 1823529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 07:44:48.200060043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:44:48.209625 1823529 docker.go:319] overlay module found
	I1216 07:44:48.212847 1823529 out.go:179] * Using the docker driver based on existing profile
	I1216 07:44:48.215837 1823529 start.go:309] selected driver: docker
	I1216 07:44:48.215862 1823529 start.go:927] validating driver "docker" against &{Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:44:48.216005 1823529 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 07:44:48.216133 1823529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:44:48.272891 1823529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 07:44:48.262435736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:44:48.273321 1823529 cni.go:84] Creating CNI manager for ""
	I1216 07:44:48.273385 1823529 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 07:44:48.273438 1823529 start.go:353] cluster config:
	{Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:44:48.276792 1823529 out.go:179] * Starting "pause-375517" primary control-plane node in "pause-375517" cluster
	I1216 07:44:48.279633 1823529 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 07:44:48.282657 1823529 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 07:44:48.285584 1823529 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:44:48.285635 1823529 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 07:44:48.285645 1823529 cache.go:65] Caching tarball of preloaded images
	I1216 07:44:48.285739 1823529 preload.go:238] Found /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 07:44:48.285751 1823529 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 07:44:48.285896 1823529 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/config.json ...
	I1216 07:44:48.286137 1823529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 07:44:48.306982 1823529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 07:44:48.307009 1823529 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 07:44:48.307025 1823529 cache.go:243] Successfully downloaded all kic artifacts
	I1216 07:44:48.307058 1823529 start.go:360] acquireMachinesLock for pause-375517: {Name:mk835939422fc9fc96e0811c1d3d47bbe9b9c1a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 07:44:48.307117 1823529 start.go:364] duration metric: took 36.775µs to acquireMachinesLock for "pause-375517"
	I1216 07:44:48.307142 1823529 start.go:96] Skipping create...Using existing machine configuration
	I1216 07:44:48.307148 1823529 fix.go:54] fixHost starting: 
	I1216 07:44:48.307418 1823529 cli_runner.go:164] Run: docker container inspect pause-375517 --format={{.State.Status}}
	I1216 07:44:48.325101 1823529 fix.go:112] recreateIfNeeded on pause-375517: state=Running err=<nil>
	W1216 07:44:48.325132 1823529 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 07:44:48.328593 1823529 out.go:252] * Updating the running docker "pause-375517" container ...
	I1216 07:44:48.328635 1823529 machine.go:94] provisionDockerMachine start ...
	I1216 07:44:48.328735 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:48.346960 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:48.347362 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:48.347379 1823529 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 07:44:48.484261 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-375517
	
	I1216 07:44:48.484284 1823529 ubuntu.go:182] provisioning hostname "pause-375517"
	I1216 07:44:48.484373 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:48.503691 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:48.504007 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:48.504017 1823529 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-375517 && echo "pause-375517" | sudo tee /etc/hostname
	I1216 07:44:48.647762 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-375517
	
	I1216 07:44:48.647841 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:48.673582 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:48.673888 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:48.673904 1823529 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-375517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-375517/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-375517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 07:44:48.812845 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 07:44:48.812913 1823529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22141-1596013/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-1596013/.minikube}
	I1216 07:44:48.812950 1823529 ubuntu.go:190] setting up certificates
	I1216 07:44:48.812959 1823529 provision.go:84] configureAuth start
	I1216 07:44:48.813026 1823529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-375517
	I1216 07:44:48.831826 1823529 provision.go:143] copyHostCerts
	I1216 07:44:48.831911 1823529 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem, removing ...
	I1216 07:44:48.831927 1823529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem
	I1216 07:44:48.832001 1823529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.pem (1078 bytes)
	I1216 07:44:48.832108 1823529 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem, removing ...
	I1216 07:44:48.832120 1823529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem
	I1216 07:44:48.832149 1823529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/cert.pem (1123 bytes)
	I1216 07:44:48.832219 1823529 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem, removing ...
	I1216 07:44:48.832230 1823529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem
	I1216 07:44:48.832257 1823529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-1596013/.minikube/key.pem (1675 bytes)
	I1216 07:44:48.832361 1823529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem org=jenkins.pause-375517 san=[127.0.0.1 192.168.85.2 localhost minikube pause-375517]
	I1216 07:44:49.085236 1823529 provision.go:177] copyRemoteCerts
	I1216 07:44:49.085302 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 07:44:49.085347 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:49.104018 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:49.200766 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 07:44:49.219066 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 07:44:49.236331 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 07:44:49.254404 1823529 provision.go:87] duration metric: took 441.422399ms to configureAuth
	I1216 07:44:49.254433 1823529 ubuntu.go:206] setting minikube options for container-runtime
	I1216 07:44:49.254661 1823529 config.go:182] Loaded profile config "pause-375517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:44:49.254794 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:49.271376 1823529 main.go:143] libmachine: Using SSH client type: native
	I1216 07:44:49.271683 1823529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34525 <nil> <nil>}
	I1216 07:44:49.271707 1823529 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 07:44:54.615946 1823529 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 07:44:54.615969 1823529 machine.go:97] duration metric: took 6.28732437s to provisionDockerMachine
	I1216 07:44:54.615982 1823529 start.go:293] postStartSetup for "pause-375517" (driver="docker")
	I1216 07:44:54.615993 1823529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 07:44:54.616058 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 07:44:54.616106 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.633937 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:54.740497 1823529 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 07:44:54.743957 1823529 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 07:44:54.743984 1823529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 07:44:54.743996 1823529 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/addons for local assets ...
	I1216 07:44:54.744051 1823529 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-1596013/.minikube/files for local assets ...
	I1216 07:44:54.744144 1823529 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem -> 15992552.pem in /etc/ssl/certs
	I1216 07:44:54.744257 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 07:44:54.752079 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:44:54.770147 1823529 start.go:296] duration metric: took 154.139956ms for postStartSetup
	I1216 07:44:54.770231 1823529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:44:54.770277 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.788497 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:54.881870 1823529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 07:44:54.886965 1823529 fix.go:56] duration metric: took 6.579809252s for fixHost
	I1216 07:44:54.886990 1823529 start.go:83] releasing machines lock for "pause-375517", held for 6.579860584s
	I1216 07:44:54.887073 1823529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-375517
	I1216 07:44:54.903786 1823529 ssh_runner.go:195] Run: cat /version.json
	I1216 07:44:54.903850 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.904141 1823529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 07:44:54.904211 1823529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375517
	I1216 07:44:54.924117 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:54.924001 1823529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34525 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/pause-375517/id_rsa Username:docker}
	I1216 07:44:55.018714 1823529 ssh_runner.go:195] Run: systemctl --version
	I1216 07:44:55.108785 1823529 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 07:44:55.155497 1823529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 07:44:55.160408 1823529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 07:44:55.160596 1823529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 07:44:55.169532 1823529 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 07:44:55.169570 1823529 start.go:496] detecting cgroup driver to use...
	I1216 07:44:55.169603 1823529 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 07:44:55.169681 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 07:44:55.185653 1823529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 07:44:55.200250 1823529 docker.go:218] disabling cri-docker service (if available) ...
	I1216 07:44:55.200327 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 07:44:55.216454 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 07:44:55.230222 1823529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 07:44:55.367656 1823529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 07:44:55.531876 1823529 docker.go:234] disabling docker service ...
	I1216 07:44:55.531966 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 07:44:55.547106 1823529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 07:44:55.559974 1823529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 07:44:55.694458 1823529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 07:44:55.822258 1823529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 07:44:55.836572 1823529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 07:44:55.852139 1823529 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 07:44:55.852234 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.861543 1823529 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 07:44:55.861656 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.870646 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.879365 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.888442 1823529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 07:44:55.897071 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.906344 1823529 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.916350 1823529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 07:44:55.925140 1823529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 07:44:55.932830 1823529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 07:44:55.940253 1823529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:44:56.078291 1823529 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 07:44:56.305554 1823529 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 07:44:56.305629 1823529 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 07:44:56.309446 1823529 start.go:564] Will wait 60s for crictl version
	I1216 07:44:56.309509 1823529 ssh_runner.go:195] Run: which crictl
	I1216 07:44:56.312945 1823529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 07:44:56.338269 1823529 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 07:44:56.338374 1823529 ssh_runner.go:195] Run: crio --version
	I1216 07:44:56.366585 1823529 ssh_runner.go:195] Run: crio --version
	I1216 07:44:56.397796 1823529 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 07:44:56.400805 1823529 cli_runner.go:164] Run: docker network inspect pause-375517 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 07:44:56.417311 1823529 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 07:44:56.421223 1823529 kubeadm.go:884] updating cluster {Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 07:44:56.421376 1823529 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 07:44:56.421436 1823529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:44:56.454947 1823529 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:44:56.454972 1823529 crio.go:433] Images already preloaded, skipping extraction
	I1216 07:44:56.455030 1823529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 07:44:56.480411 1823529 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 07:44:56.480436 1823529 cache_images.go:86] Images are preloaded, skipping loading
	I1216 07:44:56.480451 1823529 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1216 07:44:56.480580 1823529 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-375517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 07:44:56.480667 1823529 ssh_runner.go:195] Run: crio config
	I1216 07:44:56.555939 1823529 cni.go:84] Creating CNI manager for ""
	I1216 07:44:56.555964 1823529 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 07:44:56.555983 1823529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 07:44:56.556015 1823529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-375517 NodeName:pause-375517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 07:44:56.556168 1823529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-375517"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 07:44:56.556262 1823529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 07:44:56.564106 1823529 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 07:44:56.564209 1823529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 07:44:56.571784 1823529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 07:44:56.584620 1823529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 07:44:56.597347 1823529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 07:44:56.610645 1823529 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 07:44:56.614578 1823529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:44:56.746429 1823529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:44:56.760652 1823529 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517 for IP: 192.168.85.2
	I1216 07:44:56.760676 1823529 certs.go:195] generating shared ca certs ...
	I1216 07:44:56.760693 1823529 certs.go:227] acquiring lock for ca certs: {Name:mkbf72d2e438185e2867d262e148d82e5455cccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:44:56.760829 1823529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key
	I1216 07:44:56.760879 1823529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key
	I1216 07:44:56.760892 1823529 certs.go:257] generating profile certs ...
	I1216 07:44:56.760993 1823529 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.key
	I1216 07:44:56.761065 1823529 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/apiserver.key.257effa7
	I1216 07:44:56.761112 1823529 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/proxy-client.key
	I1216 07:44:56.761222 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem (1338 bytes)
	W1216 07:44:56.761265 1823529 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255_empty.pem, impossibly tiny 0 bytes
	I1216 07:44:56.761276 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 07:44:56.761307 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/ca.pem (1078 bytes)
	I1216 07:44:56.761340 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/cert.pem (1123 bytes)
	I1216 07:44:56.761367 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/key.pem (1675 bytes)
	I1216 07:44:56.761417 1823529 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem (1708 bytes)
	I1216 07:44:56.762034 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 07:44:56.779600 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 07:44:56.797895 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 07:44:56.814757 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 07:44:56.833312 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 07:44:56.850941 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 07:44:56.869113 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 07:44:56.892252 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 07:44:56.917906 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/ssl/certs/15992552.pem --> /usr/share/ca-certificates/15992552.pem (1708 bytes)
	I1216 07:44:56.941827 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 07:44:56.964756 1823529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-1596013/.minikube/certs/1599255.pem --> /usr/share/ca-certificates/1599255.pem (1338 bytes)
	I1216 07:44:56.983524 1823529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 07:44:56.996286 1823529 ssh_runner.go:195] Run: openssl version
	I1216 07:44:57.003826 1823529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.013550 1823529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15992552.pem /etc/ssl/certs/15992552.pem
	I1216 07:44:57.021712 1823529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.025634 1823529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 06:24 /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.025732 1823529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15992552.pem
	I1216 07:44:57.066994 1823529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 07:44:57.074443 1823529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.081727 1823529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 07:44:57.089174 1823529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.093100 1823529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.093168 1823529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 07:44:57.134590 1823529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 07:44:57.142139 1823529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.149615 1823529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1599255.pem /etc/ssl/certs/1599255.pem
	I1216 07:44:57.157482 1823529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.161122 1823529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 06:24 /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.161191 1823529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1599255.pem
	I1216 07:44:57.202523 1823529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 07:44:57.209955 1823529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 07:44:57.214016 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 07:44:57.255714 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 07:44:57.296801 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 07:44:57.338358 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 07:44:57.379481 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 07:44:57.420603 1823529 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 07:44:57.462583 1823529 kubeadm.go:401] StartCluster: {Name:pause-375517 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-375517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 07:44:57.462717 1823529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 07:44:57.462783 1823529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 07:44:57.492779 1823529 cri.go:89] found id: "e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc"
	I1216 07:44:57.492808 1823529 cri.go:89] found id: "6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9"
	I1216 07:44:57.492813 1823529 cri.go:89] found id: "76751aa856240697939bca0a05e7d09ba45b9f61fb6c295aaabb8abcd8159e58"
	I1216 07:44:57.492816 1823529 cri.go:89] found id: "618d84d84bc9252e472478ee25b67f5df79c96360a711e03237ac217587aba39"
	I1216 07:44:57.492819 1823529 cri.go:89] found id: "517fe82d993ee6f5d824214ffe285d9f264597a2c18d89d85e7f4597b139afb3"
	I1216 07:44:57.492822 1823529 cri.go:89] found id: "2463d298843b3b6c96a2c0d31eeb6e502860ca63ff1ab141388cacd59870583f"
	I1216 07:44:57.492826 1823529 cri.go:89] found id: "a1cf99c5084156ca31d04538b1e6cde3c65b6503d4a9ab7f17a0b4226501fd7a"
	I1216 07:44:57.492828 1823529 cri.go:89] found id: ""
	I1216 07:44:57.492878 1823529 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 07:44:57.503902 1823529 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T07:44:57Z" level=error msg="open /run/runc: no such file or directory"
	I1216 07:44:57.503982 1823529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 07:44:57.512245 1823529 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 07:44:57.512266 1823529 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 07:44:57.512355 1823529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 07:44:57.520041 1823529 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 07:44:57.520767 1823529 kubeconfig.go:125] found "pause-375517" server: "https://192.168.85.2:8443"
	I1216 07:44:57.521586 1823529 kapi.go:59] client config for pause-375517: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:44:57.522078 1823529 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 07:44:57.522093 1823529 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 07:44:57.522100 1823529 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 07:44:57.522104 1823529 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 07:44:57.522111 1823529 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 07:44:57.522391 1823529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 07:44:57.530246 1823529 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 07:44:57.530327 1823529 kubeadm.go:602] duration metric: took 18.053919ms to restartPrimaryControlPlane
	I1216 07:44:57.530352 1823529 kubeadm.go:403] duration metric: took 67.777605ms to StartCluster
	I1216 07:44:57.530392 1823529 settings.go:142] acquiring lock: {Name:mk011eec7aa10b3db81dce3dc7edf51f985e2ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:44:57.530465 1823529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 07:44:57.531273 1823529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/kubeconfig: {Name:mk61a8e87d869d27c5acc78145bae6b02a8088a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 07:44:57.531489 1823529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 07:44:57.531808 1823529 config.go:182] Loaded profile config "pause-375517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:44:57.531856 1823529 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 07:44:57.537625 1823529 out.go:179] * Enabled addons: 
	I1216 07:44:57.537625 1823529 out.go:179] * Verifying Kubernetes components...
	I1216 07:44:57.540520 1823529 addons.go:530] duration metric: took 8.65779ms for enable addons: enabled=[]
	I1216 07:44:57.540568 1823529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 07:44:57.685553 1823529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 07:44:57.698688 1823529 node_ready.go:35] waiting up to 6m0s for node "pause-375517" to be "Ready" ...
	I1216 07:45:02.974934 1823529 node_ready.go:49] node "pause-375517" is "Ready"
	I1216 07:45:02.974962 1823529 node_ready.go:38] duration metric: took 5.276231055s for node "pause-375517" to be "Ready" ...
	I1216 07:45:02.974976 1823529 api_server.go:52] waiting for apiserver process to appear ...
	I1216 07:45:02.975038 1823529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:45:02.997105 1823529 api_server.go:72] duration metric: took 5.465578703s to wait for apiserver process to appear ...
	I1216 07:45:02.997129 1823529 api_server.go:88] waiting for apiserver healthz status ...
	I1216 07:45:02.997148 1823529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 07:45:03.210047 1823529 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:45:03.210075 1823529 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:45:03.497463 1823529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 07:45:03.506009 1823529 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 07:45:03.506160 1823529 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 07:45:03.997828 1823529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1216 07:45:04.007343 1823529 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1216 07:45:04.008901 1823529 api_server.go:141] control plane version: v1.34.2
	I1216 07:45:04.008939 1823529 api_server.go:131] duration metric: took 1.01180219s to wait for apiserver health ...
	I1216 07:45:04.008952 1823529 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 07:45:04.012716 1823529 system_pods.go:59] 7 kube-system pods found
	I1216 07:45:04.012758 1823529 system_pods.go:61] "coredns-66bc5c9577-92vwf" [e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:45:04.012769 1823529 system_pods.go:61] "etcd-pause-375517" [00690516-0fd6-4a7d-9abb-d21dd49bd0a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 07:45:04.012775 1823529 system_pods.go:61] "kindnet-cmscz" [c976794b-50e2-4508-a421-d94c7c247cd7] Running
	I1216 07:45:04.012783 1823529 system_pods.go:61] "kube-apiserver-pause-375517" [5a339fa1-1300-4124-a183-16077c517dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:45:04.012790 1823529 system_pods.go:61] "kube-controller-manager-pause-375517" [c07521e2-9ac8-4116-a985-f7a1664e4006] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:45:04.012795 1823529 system_pods.go:61] "kube-proxy-t4gtq" [b5cbe5d5-2616-47da-9e2c-3320915cc6a2] Running
	I1216 07:45:04.012808 1823529 system_pods.go:61] "kube-scheduler-pause-375517" [e9c2ba56-3b59-4bd1-92a9-6ce46072165d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:45:04.012813 1823529 system_pods.go:74] duration metric: took 3.856271ms to wait for pod list to return data ...
	I1216 07:45:04.012826 1823529 default_sa.go:34] waiting for default service account to be created ...
	I1216 07:45:04.016010 1823529 default_sa.go:45] found service account: "default"
	I1216 07:45:04.016041 1823529 default_sa.go:55] duration metric: took 3.20732ms for default service account to be created ...
	I1216 07:45:04.016052 1823529 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 07:45:04.019631 1823529 system_pods.go:86] 7 kube-system pods found
	I1216 07:45:04.019670 1823529 system_pods.go:89] "coredns-66bc5c9577-92vwf" [e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 07:45:04.019680 1823529 system_pods.go:89] "etcd-pause-375517" [00690516-0fd6-4a7d-9abb-d21dd49bd0a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 07:45:04.019687 1823529 system_pods.go:89] "kindnet-cmscz" [c976794b-50e2-4508-a421-d94c7c247cd7] Running
	I1216 07:45:04.019695 1823529 system_pods.go:89] "kube-apiserver-pause-375517" [5a339fa1-1300-4124-a183-16077c517dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 07:45:04.019703 1823529 system_pods.go:89] "kube-controller-manager-pause-375517" [c07521e2-9ac8-4116-a985-f7a1664e4006] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 07:45:04.019710 1823529 system_pods.go:89] "kube-proxy-t4gtq" [b5cbe5d5-2616-47da-9e2c-3320915cc6a2] Running
	I1216 07:45:04.019730 1823529 system_pods.go:89] "kube-scheduler-pause-375517" [e9c2ba56-3b59-4bd1-92a9-6ce46072165d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 07:45:04.019738 1823529 system_pods.go:126] duration metric: took 3.680958ms to wait for k8s-apps to be running ...
	I1216 07:45:04.019750 1823529 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 07:45:04.019817 1823529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:04.033755 1823529 system_svc.go:56] duration metric: took 13.995496ms WaitForService to wait for kubelet
	I1216 07:45:04.033829 1823529 kubeadm.go:587] duration metric: took 6.502307187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 07:45:04.033876 1823529 node_conditions.go:102] verifying NodePressure condition ...
	I1216 07:45:04.037691 1823529 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 07:45:04.037725 1823529 node_conditions.go:123] node cpu capacity is 2
	I1216 07:45:04.037738 1823529 node_conditions.go:105] duration metric: took 3.833329ms to run NodePressure ...
	I1216 07:45:04.037771 1823529 start.go:242] waiting for startup goroutines ...
	I1216 07:45:04.037786 1823529 start.go:247] waiting for cluster config update ...
	I1216 07:45:04.037798 1823529 start.go:256] writing updated cluster config ...
	I1216 07:45:04.038143 1823529 ssh_runner.go:195] Run: rm -f paused
	I1216 07:45:04.041934 1823529 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:45:04.042557 1823529 kapi.go:59] client config for pause-375517: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/pause-375517/client.key", CAFile:"/home/jenkins/minikube-integration/22141-1596013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 07:45:04.045966 1823529 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-92vwf" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:45:06.051889 1823529 pod_ready.go:104] pod "coredns-66bc5c9577-92vwf" is not "Ready", error: <nil>
	W1216 07:45:08.052575 1823529 pod_ready.go:104] pod "coredns-66bc5c9577-92vwf" is not "Ready", error: <nil>
	I1216 07:45:10.551685 1823529 pod_ready.go:94] pod "coredns-66bc5c9577-92vwf" is "Ready"
	I1216 07:45:10.551719 1823529 pod_ready.go:86] duration metric: took 6.50572704s for pod "coredns-66bc5c9577-92vwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:10.554435 1823529 pod_ready.go:83] waiting for pod "etcd-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 07:45:12.560436 1823529 pod_ready.go:104] pod "etcd-pause-375517" is not "Ready", error: <nil>
	W1216 07:45:14.560910 1823529 pod_ready.go:104] pod "etcd-pause-375517" is not "Ready", error: <nil>
	I1216 07:45:16.060641 1823529 pod_ready.go:94] pod "etcd-pause-375517" is "Ready"
	I1216 07:45:16.060667 1823529 pod_ready.go:86] duration metric: took 5.50620337s for pod "etcd-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.068855 1823529 pod_ready.go:83] waiting for pod "kube-apiserver-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.075257 1823529 pod_ready.go:94] pod "kube-apiserver-pause-375517" is "Ready"
	I1216 07:45:16.075297 1823529 pod_ready.go:86] duration metric: took 6.410997ms for pod "kube-apiserver-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.077840 1823529 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.082286 1823529 pod_ready.go:94] pod "kube-controller-manager-pause-375517" is "Ready"
	I1216 07:45:16.082311 1823529 pod_ready.go:86] duration metric: took 4.444371ms for pod "kube-controller-manager-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.084831 1823529 pod_ready.go:83] waiting for pod "kube-proxy-t4gtq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.257416 1823529 pod_ready.go:94] pod "kube-proxy-t4gtq" is "Ready"
	I1216 07:45:16.257449 1823529 pod_ready.go:86] duration metric: took 172.590813ms for pod "kube-proxy-t4gtq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.457377 1823529 pod_ready.go:83] waiting for pod "kube-scheduler-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.857449 1823529 pod_ready.go:94] pod "kube-scheduler-pause-375517" is "Ready"
	I1216 07:45:16.857480 1823529 pod_ready.go:86] duration metric: took 400.015387ms for pod "kube-scheduler-pause-375517" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 07:45:16.857494 1823529 pod_ready.go:40] duration metric: took 12.815527566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 07:45:16.916419 1823529 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 07:45:16.919675 1823529 out.go:179] * Done! kubectl is now configured to use "pause-375517" cluster and "default" namespace by default
	I1216 07:45:18.320076 1798136 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001193351s
	I1216 07:45:18.320115 1798136 kubeadm.go:319] 
	I1216 07:45:18.320179 1798136 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 07:45:18.320232 1798136 kubeadm.go:319] 	- The kubelet is not running
	I1216 07:45:18.320356 1798136 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 07:45:18.320369 1798136 kubeadm.go:319] 
	I1216 07:45:18.320516 1798136 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 07:45:18.320565 1798136 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 07:45:18.320600 1798136 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 07:45:18.320608 1798136 kubeadm.go:319] 
	I1216 07:45:18.325958 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 07:45:18.326491 1798136 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 07:45:18.326621 1798136 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 07:45:18.326926 1798136 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 07:45:18.326936 1798136 kubeadm.go:319] 
	I1216 07:45:18.327017 1798136 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 07:45:18.327166 1798136 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001193351s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 07:45:18.327284 1798136 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 07:45:18.754392 1798136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:45:18.772231 1798136 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 07:45:18.772294 1798136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 07:45:18.783699 1798136 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 07:45:18.783726 1798136 kubeadm.go:158] found existing configuration files:
	
	I1216 07:45:18.783783 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 07:45:18.793812 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 07:45:18.793878 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 07:45:18.802961 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 07:45:18.812328 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 07:45:18.812395 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 07:45:18.821315 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 07:45:18.832420 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 07:45:18.832563 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 07:45:18.842333 1798136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 07:45:18.852116 1798136 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 07:45:18.852185 1798136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 07:45:18.860819 1798136 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 07:45:18.915421 1798136 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 07:45:18.915483 1798136 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 07:45:19.051011 1798136 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 07:45:19.051083 1798136 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 07:45:19.051119 1798136 kubeadm.go:319] OS: Linux
	I1216 07:45:19.051164 1798136 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 07:45:19.051212 1798136 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 07:45:19.051259 1798136 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 07:45:19.051307 1798136 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 07:45:19.051355 1798136 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 07:45:19.051403 1798136 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 07:45:19.051448 1798136 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 07:45:19.051496 1798136 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 07:45:19.051542 1798136 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 07:45:19.142528 1798136 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 07:45:19.143196 1798136 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 07:45:19.143363 1798136 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 07:45:19.156893 1798136 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 07:45:19.160253 1798136 out.go:252]   - Generating certificates and keys ...
	I1216 07:45:19.160353 1798136 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 07:45:19.160426 1798136 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 07:45:19.160538 1798136 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 07:45:19.160602 1798136 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 07:45:19.160681 1798136 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 07:45:19.160739 1798136 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 07:45:19.160806 1798136 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 07:45:19.160871 1798136 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 07:45:19.161229 1798136 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 07:45:19.161786 1798136 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 07:45:19.162232 1798136 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 07:45:19.162458 1798136 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 07:45:19.411128 1798136 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 07:45:19.797290 1798136 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 07:45:20.144939 1798136 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 07:45:20.919415 1798136 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 07:45:20.976842 1798136 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 07:45:20.979794 1798136 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 07:45:20.985045 1798136 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.260071225Z" level=info msg="Started container" PID=2374 containerID=e10fa0b3d14a2100bd2a09bb4836e34a4acbec6c8e118a3af3a2450450d66d20 description=kube-system/kube-scheduler-pause-375517/kube-scheduler id=1a7f5956-fb90-4a1c-8575-0a512ef296c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dde12892f826f842f45ba837580b02656d2c8b50a2ac260c648c8f9af74410f
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.265694144Z" level=info msg="Created container 546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2: kube-system/etcd-pause-375517/etcd" id=f2eae53a-11f0-455a-8c99-a95b1d4e7615 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.267968334Z" level=info msg="Starting container: 546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2" id=3274bde2-420c-4d4f-ab64-47d191b5f737 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.276077245Z" level=info msg="Started container" PID=2381 containerID=546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2 description=kube-system/etcd-pause-375517/etcd id=3274bde2-420c-4d4f-ab64-47d191b5f737 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f6124b2ebb5213ec462756693ee7757731a3f434da27f430335852b0c001eac
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.298970074Z" level=info msg="Created container 54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255: kube-system/kindnet-cmscz/kindnet-cni" id=3af3a3a8-7c93-4e6d-b58e-12f8a6013edc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.299864919Z" level=info msg="Starting container: 54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255" id=621910e7-d15a-4e5f-b221-9c67b28a62c6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.306394046Z" level=info msg="Started container" PID=2397 containerID=54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255 description=kube-system/kindnet-cmscz/kindnet-cni id=621910e7-d15a-4e5f-b221-9c67b28a62c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=346db1de4bb636ea82d10111e582b881ca1a7df62780833b755a35939bd6f09f
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.378076939Z" level=info msg="Created container 4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1: kube-system/kube-proxy-t4gtq/kube-proxy" id=2f0cef4b-bf2a-44cf-952b-18de59bc303e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.378991911Z" level=info msg="Starting container: 4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1" id=a920dafa-9a00-43ea-ab40-df194f9c1fb3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 07:44:58 pause-375517 crio[2070]: time="2025-12-16T07:44:58.381799932Z" level=info msg="Started container" PID=2389 containerID=4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1 description=kube-system/kube-proxy-t4gtq/kube-proxy id=a920dafa-9a00-43ea-ab40-df194f9c1fb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=56505329b3ab77977b95542cfc28bc98dd8247b6a161b0f6d78ebaef52aedf7a
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.636724129Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.640378124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.640586832Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.640627522Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.643908492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.643939705Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.64396158Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.647055274Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.647089679Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.647112752Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.650156952Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.650192472Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.650217974Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.653408925Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 07:45:08 pause-375517 crio[2070]: time="2025-12-16T07:45:08.653445102Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	54832596223e3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   346db1de4bb63       kindnet-cmscz                          kube-system
	4f3eca46adabd       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   24 seconds ago       Running             kube-proxy                1                   56505329b3ab7       kube-proxy-t4gtq                       kube-system
	6f71b54f1ba1c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   a3b997adb651d       coredns-66bc5c9577-92vwf               kube-system
	546b094189b40       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   24 seconds ago       Running             etcd                      1                   3f6124b2ebb52       etcd-pause-375517                      kube-system
	e10fa0b3d14a2       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   24 seconds ago       Running             kube-scheduler            1                   0dde12892f826       kube-scheduler-pause-375517            kube-system
	8a86aee515e65       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   74979dd3fad13       kube-apiserver-pause-375517            kube-system
	ca7b3191cff4e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   24 seconds ago       Running             kube-controller-manager   1                   eb6970570e113       kube-controller-manager-pause-375517   kube-system
	e6e6428132595       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   a3b997adb651d       coredns-66bc5c9577-92vwf               kube-system
	6f2801c86394d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   56505329b3ab7       kube-proxy-t4gtq                       kube-system
	76751aa856240       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   346db1de4bb63       kindnet-cmscz                          kube-system
	618d84d84bc92       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   74979dd3fad13       kube-apiserver-pause-375517            kube-system
	517fe82d993ee       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   eb6970570e113       kube-controller-manager-pause-375517   kube-system
	2463d298843b3       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   0dde12892f826       kube-scheduler-pause-375517            kube-system
	a1cf99c508415       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   3f6124b2ebb52       etcd-pause-375517                      kube-system
	
	
	==> coredns [6f71b54f1ba1c65db4deac9b06dfb0d64fce05e51d14ce123aa0eb55cc857a8a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51590 - 50993 "HINFO IN 7267632414797955180.1726797191412746475. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012659252s
	
	
	==> coredns [e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46134 - 26844 "HINFO IN 1827242947691164020.8591283332183659217. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013998745s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-375517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-375517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=pause-375517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T07_43_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 07:43:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-375517
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 07:45:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:43:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:43:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:43:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 07:45:00 +0000   Tue, 16 Dec 2025 07:44:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-375517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                a4719513-6662-412e-8570-46ee1b2cab2a
	  Boot ID:                    c02b8f3a-b639-46a9-b38c-18c198a7a8c0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-92vwf                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-375517                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-cmscz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-375517             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-375517    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-t4gtq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-375517             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 19s                kube-proxy       
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-375517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-375517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node pause-375517 status is now: NodeHasSufficientPID
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-375517 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-375517 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-375517 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-375517 event: Registered Node pause-375517 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-375517 status is now: NodeReady
	  Normal   RegisteredNode           16s                node-controller  Node pause-375517 event: Registered Node pause-375517 in Controller
	
	
	==> dmesg <==
	[  +2.815105] overlayfs: idmapped layers are currently not supported
	[Dec16 07:06] overlayfs: idmapped layers are currently not supported
	[Dec16 07:10] overlayfs: idmapped layers are currently not supported
	[Dec16 07:12] overlayfs: idmapped layers are currently not supported
	[Dec16 07:17] overlayfs: idmapped layers are currently not supported
	[ +33.022046] overlayfs: idmapped layers are currently not supported
	[Dec16 07:18] overlayfs: idmapped layers are currently not supported
	[Dec16 07:19] overlayfs: idmapped layers are currently not supported
	[Dec16 07:20] overlayfs: idmapped layers are currently not supported
	[Dec16 07:22] overlayfs: idmapped layers are currently not supported
	[Dec16 07:23] overlayfs: idmapped layers are currently not supported
	[  +6.617945] overlayfs: idmapped layers are currently not supported
	[ +47.625208] overlayfs: idmapped layers are currently not supported
	[Dec16 07:24] overlayfs: idmapped layers are currently not supported
	[Dec16 07:25] overlayfs: idmapped layers are currently not supported
	[ +25.916657] overlayfs: idmapped layers are currently not supported
	[Dec16 07:26] overlayfs: idmapped layers are currently not supported
	[Dec16 07:27] overlayfs: idmapped layers are currently not supported
	[Dec16 07:29] overlayfs: idmapped layers are currently not supported
	[Dec16 07:31] overlayfs: idmapped layers are currently not supported
	[Dec16 07:32] overlayfs: idmapped layers are currently not supported
	[ +24.023346] overlayfs: idmapped layers are currently not supported
	[Dec16 07:33] overlayfs: idmapped layers are currently not supported
	[Dec16 07:36] overlayfs: idmapped layers are currently not supported
	[Dec16 07:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [546b094189b407e12be0ffc6c3a12d42b955bed31b5fc2d5b8d7bf597ce67cb2] <==
	{"level":"warn","ts":"2025-12-16T07:45:00.833847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.866461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.908971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.954261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:00.980177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.024715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.060090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.083289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.156238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.171370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.206028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.242784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.271211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.290420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.325297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.375469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.389425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.426620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.497424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.515920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.598096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.625231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.670275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.742816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:45:01.975166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39058","server-name":"","error":"EOF"}
	
	
	==> etcd [a1cf99c5084156ca31d04538b1e6cde3c65b6503d4a9ab7f17a0b4226501fd7a] <==
	{"level":"warn","ts":"2025-12-16T07:43:54.624094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.645603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.665176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.695579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.722458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.765283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T07:43:54.857844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58788","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T07:44:49.429830Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-16T07:44:49.429891Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-375517","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-16T07:44:49.429980Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T07:44:49.710375Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T07:44:49.710490Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.710513Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-16T07:44:49.710547Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710632Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710671Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T07:44:49.710680Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.710693Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710770Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T07:44:49.710822Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T07:44:49.710863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.713641Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-16T07:44:49.713728Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T07:44:49.713759Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-16T07:44:49.713767Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-375517","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 07:45:23 up 10:27,  0 user,  load average: 1.66, 1.72, 1.79
	Linux pause-375517 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54832596223e3df0d47d266141e625d237f37c3dab9cb50e79333493b8497255] <==
	I1216 07:44:58.434727       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 07:44:58.434956       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 07:44:58.435085       1 main.go:148] setting mtu 1500 for CNI 
	I1216 07:44:58.435097       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 07:44:58.435110       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T07:44:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 07:44:58.636732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 07:44:58.636757       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 07:44:58.636766       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 07:44:58.637466       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 07:45:03.240711       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 07:45:03.240821       1 metrics.go:72] Registering metrics
	I1216 07:45:03.240900       1 controller.go:711] "Syncing nftables rules"
	I1216 07:45:08.636265       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 07:45:08.636345       1 main.go:301] handling current node
	I1216 07:45:18.637458       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 07:45:18.637613       1 main.go:301] handling current node
	
	
	==> kindnet [76751aa856240697939bca0a05e7d09ba45b9f61fb6c295aaabb8abcd8159e58] <==
	I1216 07:44:04.529769       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 07:44:04.530032       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1216 07:44:04.530156       1 main.go:148] setting mtu 1500 for CNI 
	I1216 07:44:04.530169       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 07:44:04.530187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T07:44:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 07:44:04.727577       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 07:44:04.727690       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 07:44:04.727700       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 07:44:04.733296       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 07:44:34.727907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 07:44:34.728050       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 07:44:34.729102       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1216 07:44:34.733429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1216 07:44:36.028339       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 07:44:36.028390       1 metrics.go:72] Registering metrics
	I1216 07:44:36.028489       1 controller.go:711] "Syncing nftables rules"
	I1216 07:44:44.732362       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1216 07:44:44.732421       1 main.go:301] handling current node
	
	
	==> kube-apiserver [618d84d84bc9252e472478ee25b67f5df79c96360a711e03237ac217587aba39] <==
	W1216 07:44:49.453479       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453522       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453546       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453570       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453595       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453620       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453647       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453673       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453731       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453756       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453781       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453808       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453833       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453856       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.453880       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455439       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455470       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455498       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455524       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455562       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455591       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455616       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455641       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.455668       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 07:44:49.452750       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8a86aee515e658b93fc7d10adcd6891a6e4eba14453f9747b4a80a7a361f9266] <==
	I1216 07:45:03.136791       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 07:45:03.137744       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 07:45:03.156684       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 07:45:03.164567       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 07:45:03.164617       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 07:45:03.164675       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 07:45:03.175503       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 07:45:03.175542       1 policy_source.go:240] refreshing policies
	I1216 07:45:03.181756       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 07:45:03.182858       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 07:45:03.183062       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 07:45:03.183378       1 aggregator.go:171] initial CRD sync complete...
	I1216 07:45:03.183399       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 07:45:03.183405       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 07:45:03.183412       1 cache.go:39] Caches are synced for autoregister controller
	I1216 07:45:03.194344       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 07:45:03.199169       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 07:45:03.213490       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1216 07:45:03.221733       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 07:45:03.714007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 07:45:04.936713       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 07:45:06.338563       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 07:45:06.576134       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 07:45:06.628012       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 07:45:06.731420       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [517fe82d993ee6f5d824214ffe285d9f264597a2c18d89d85e7f4597b139afb3] <==
	I1216 07:44:02.762695       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 07:44:02.763132       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 07:44:02.763470       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 07:44:02.765494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 07:44:02.765582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 07:44:02.767120       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 07:44:02.768397       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 07:44:02.769642       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 07:44:02.769702       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 07:44:02.769729       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 07:44:02.769735       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 07:44:02.769740       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 07:44:02.771850       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 07:44:02.778045       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 07:44:02.778137       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 07:44:02.778323       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 07:44:02.778459       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 07:44:02.779956       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-375517" podCIDRs=["10.244.0.0/24"]
	I1216 07:44:02.787358       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:44:02.792795       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 07:44:02.801666       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 07:44:02.810696       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:44:02.810720       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 07:44:02.810729       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 07:44:47.722605       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ca7b3191cff4e83a60f9aac1be7a6876b73f4e2f8e10d231f2aa6d86edad73c5] <==
	I1216 07:45:06.370792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:45:06.371353       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 07:45:06.371393       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 07:45:06.371094       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 07:45:06.371113       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 07:45:06.371123       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 07:45:06.372291       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 07:45:06.373117       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 07:45:06.373206       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 07:45:06.373253       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 07:45:06.373285       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 07:45:06.371165       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 07:45:06.371173       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 07:45:06.374267       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 07:45:06.380582       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 07:45:06.380749       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 07:45:06.380796       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 07:45:06.380782       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 07:45:06.383550       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 07:45:06.386674       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 07:45:06.388970       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1216 07:45:06.391412       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 07:45:06.394962       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 07:45:06.398111       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 07:45:06.400663       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [4f3eca46adabd7173fb72cdd4291f5038016ba3595d2347359993260c0559ca1] <==
	I1216 07:45:01.974003       1 server_linux.go:53] "Using iptables proxy"
	I1216 07:45:03.232697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 07:45:03.335188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 07:45:03.335250       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 07:45:03.335374       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 07:45:03.372772       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 07:45:03.372908       1 server_linux.go:132] "Using iptables Proxier"
	I1216 07:45:03.379002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 07:45:03.379383       1 server.go:527] "Version info" version="v1.34.2"
	I1216 07:45:03.379612       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:45:03.385035       1 config.go:200] "Starting service config controller"
	I1216 07:45:03.385121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 07:45:03.385178       1 config.go:106] "Starting endpoint slice config controller"
	I1216 07:45:03.385207       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 07:45:03.385270       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 07:45:03.385298       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 07:45:03.394394       1 config.go:309] "Starting node config controller"
	I1216 07:45:03.394460       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 07:45:03.394470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 07:45:03.485905       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 07:45:03.485899       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 07:45:03.485946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9] <==
	I1216 07:44:04.510965       1 server_linux.go:53] "Using iptables proxy"
	I1216 07:44:04.600434       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 07:44:04.700857       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 07:44:04.700897       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1216 07:44:04.700970       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 07:44:04.824232       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 07:44:04.824365       1 server_linux.go:132] "Using iptables Proxier"
	I1216 07:44:04.828676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 07:44:04.829148       1 server.go:527] "Version info" version="v1.34.2"
	I1216 07:44:04.829211       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:44:04.830885       1 config.go:200] "Starting service config controller"
	I1216 07:44:04.830913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 07:44:04.830930       1 config.go:106] "Starting endpoint slice config controller"
	I1216 07:44:04.830934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 07:44:04.830945       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 07:44:04.830949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 07:44:04.831619       1 config.go:309] "Starting node config controller"
	I1216 07:44:04.831640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 07:44:04.831647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 07:44:04.931382       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 07:44:04.931513       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 07:44:04.931563       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2463d298843b3b6c96a2c0d31eeb6e502860ca63ff1ab141388cacd59870583f] <==
	E1216 07:43:56.887123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 07:43:56.887296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 07:43:56.887397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 07:43:56.887502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 07:43:56.887674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 07:43:56.887886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 07:43:56.888018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 07:43:56.888137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 07:43:56.888455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 07:43:56.888673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 07:43:56.888790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 07:43:56.888883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 07:43:56.889547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 07:43:56.890519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 07:43:56.890649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 07:43:56.890705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 07:43:56.890748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 07:43:56.890948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1216 07:43:57.770537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:44:49.429317       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1216 07:44:49.429347       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1216 07:44:49.429366       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 07:44:49.429395       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:44:49.429580       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1216 07:44:49.429595       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e10fa0b3d14a2100bd2a09bb4836e34a4acbec6c8e118a3af3a2450450d66d20] <==
	I1216 07:45:02.130810       1 serving.go:386] Generated self-signed cert in-memory
	I1216 07:45:03.264190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 07:45:03.264439       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 07:45:03.274219       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 07:45:03.274433       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 07:45:03.274482       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 07:45:03.274537       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 07:45:03.288751       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:45:03.288846       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:45:03.289147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 07:45:03.289193       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 07:45:03.374798       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 07:45:03.389801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 07:45:03.389733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 16 07:44:58 pause-375517 kubelet[1308]: I1216 07:44:58.088000    1308 scope.go:117] "RemoveContainer" containerID="6f2801c86394d7c2f2aa5eb96605703ff62f0e8b300e513300a2ca2ebe69abd9"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.088521    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="680da77e9ca9056b3396aa7cb72665b7" pod="kube-system/etcd-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.088740    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64b36c3ec486377192e1d929a98370b4" pod="kube-system/kube-scheduler-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.088911    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t4gtq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b5cbe5d5-2616-47da-9e2c-3320915cc6a2" pod="kube-system/kube-proxy-t4gtq"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.089085    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cmscz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c976794b-50e2-4508-a421-d94c7c247cd7" pod="kube-system/kindnet-cmscz"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.089314    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15cf344781b2c6baaaa5ac04b97ac867" pod="kube-system/kube-controller-manager-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.089516    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be626bea5116bec1aa3ead058959622e" pod="kube-system/kube-apiserver-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: I1216 07:44:58.103229    1308 scope.go:117] "RemoveContainer" containerID="e6e642813259586fc8af749f164ed270ad6375c625923994b9cd7781a9f840fc"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.103772    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-92vwf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9" pod="kube-system/coredns-66bc5c9577-92vwf"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.103961    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15cf344781b2c6baaaa5ac04b97ac867" pod="kube-system/kube-controller-manager-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104130    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be626bea5116bec1aa3ead058959622e" pod="kube-system/kube-apiserver-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104299    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="680da77e9ca9056b3396aa7cb72665b7" pod="kube-system/etcd-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104525    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-375517\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64b36c3ec486377192e1d929a98370b4" pod="kube-system/kube-scheduler-pause-375517"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104732    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t4gtq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b5cbe5d5-2616-47da-9e2c-3320915cc6a2" pod="kube-system/kube-proxy-t4gtq"
	Dec 16 07:44:58 pause-375517 kubelet[1308]: E1216 07:44:58.104962    1308 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cmscz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c976794b-50e2-4508-a421-d94c7c247cd7" pod="kube-system/kindnet-cmscz"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.761515    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-t4gtq\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="b5cbe5d5-2616-47da-9e2c-3320915cc6a2" pod="kube-system/kube-proxy-t4gtq"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.761896    1308 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-375517\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.762265    1308 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-375517\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.875359    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-cmscz\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="c976794b-50e2-4508-a421-d94c7c247cd7" pod="kube-system/kindnet-cmscz"
	Dec 16 07:45:02 pause-375517 kubelet[1308]: E1216 07:45:02.932712    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-92vwf\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="e9e3bf22-e909-407e-ac7a-67a7dbb2a7b9" pod="kube-system/coredns-66bc5c9577-92vwf"
	Dec 16 07:45:03 pause-375517 kubelet[1308]: E1216 07:45:03.058028    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-375517\" is forbidden: User \"system:node:pause-375517\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-375517' and this object" podUID="15cf344781b2c6baaaa5ac04b97ac867" pod="kube-system/kube-controller-manager-pause-375517"
	Dec 16 07:45:09 pause-375517 kubelet[1308]: W1216 07:45:09.122602    1308 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 16 07:45:17 pause-375517 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 07:45:17 pause-375517 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 07:45:17 pause-375517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-375517 -n pause-375517
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-375517 -n pause-375517: exit status 2 (362.894843ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-375517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.12s)
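
Post-mortem note: the helpers above run two manual checks against the paused profile, first `minikube status --format={{.APIServer}}` (which exits non-zero even though it prints "Running", hence "status error: exit status 2 (may be ok)"), then a kubectl query for pods not in phase Running. A minimal illustrative sketch of the same two checks, assuming minikube and kubectl are on PATH and the pause-375517 profile from this run still exists; this is not part of helpers_test.go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and prints its combined output plus any exit error,
    // mirroring how the post-mortem helpers shell out to minikube and kubectl.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\nerr: %v\n%s\n", name, args, err, out)
    }

    func main() {
        // Status check: may exit non-zero while the cluster is paused, even
        // though the APIServer field still reads "Running".
        run("minikube", "status", "--format={{.APIServer}}", "-p", "pause-375517")

        // Pod check: list any pods that are not in phase Running, exactly as
        // the kubectl invocation in the helper output above does.
        run("kubectl", "--context", "pause-375517", "get", "po", "-A",
            "-o=jsonpath={.items[*].metadata.name}",
            "--field-selector=status.phase!=Running")
    }

Running the same two commands by hand gives the same information when triaging a Pause failure.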

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.084s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:10:20.852152 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/default-k8s-diff-port-129713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:10:40.975643 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/enable-default-cni-829423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:10:48.555736 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/default-k8s-diff-port-129713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:11:06.670894 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:11:06.817321 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:11:15.044990 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/flannel-829423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:11:19.328064 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/auto-829423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:11:31.122412 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/custom-flannel-829423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:12:04.040593 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/enable-default-cni-829423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 08:12:38.904526 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/bridge-829423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (41m39s)
		TestNetworkPlugins/group (19m3s)
		TestStartStop (29m27s)
		TestStartStop/group/newest-cni (10m54s)
		TestStartStop/group/newest-cni/serial (10m54s)
		TestStartStop/group/newest-cni/serial/SecondStart (47s)
		TestStartStop/group/no-preload (19m3s)
		TestStartStop/group/no-preload/serial (19m3s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (2m32s)

goroutine 5428 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 36 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40004c6700, 0x40006d9bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400047e000, {0x534c680, 0x2c, 0x2c}, {0x40006d9d08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40006a8aa0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40006a8aa0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 3737 [chan receive, 19 minutes]:
testing.(*T).Run(0x40013e9180, {0x296eb91?, 0x0?}, 0x400205b200)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x40013e9180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x40013e9180, 0x4002324200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3733
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3794 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40017c6740, 0x4001500f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xb8?, 0x40017c6740, 0x40017c6788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x6d6974203a5d3534?, 0x2d35323032223d65?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001508300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3790
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3894 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4004f64a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3893
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 180 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 179
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 179 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4005022f40, 0x4005022f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xf0?, 0x4005022f40, 0x4005022f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4000451050?, 0x36e6570?, 0x4001592ae0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 157
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 156 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4000280900?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 171
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3897 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001560550, 0x15)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001560540)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001c4a8a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x53a3240?, 0x2a0ac?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0xffffbba9e108?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4001506f38, {0x369e520, 0x4001720000}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40002b0d90?, {0x369e520?, 0x4001720000?}, 0xe0?, 0x40017c8fa8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015550e0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3895
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4011 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x400141ef40, 0x400141ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xe0?, 0x400141ef40, 0x400141ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x4001513380?, 0x40002ebb80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40016a4000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4035
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4619 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4618
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4452 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4451
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4169 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000741750, 0x15)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000741740)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001630840)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001b32150?, 0x6fc23ac00?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x22ee5c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4001445f38, {0x369e520, 0x40015933b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x40015933b0?}, 0x50?, 0x400142c600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4002372b70, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5050 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e6618, 0x400023efc0}, {0x36d4660, 0x4004ebbba0}, 0x1, 0x0, 0x40015c3be0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6618?, 0x40002b2690?}, 0x3b9aca00, 0x40015c3e08?, 0x1, 0x40015c3be0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6618, 0x40002b2690}, 0x4001984a80, {0x40006c97e8, 0x11}, {0x29941e1, 0x14}, {0x29ac150, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36e6618, 0x40002b2690}, 0x4001984a80, {0x40006c97e8, 0x11}, {0x29786f9?, 0x2c8b3a8500161e84?}, {0x694113ec?, 0x400136af58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:272 +0xf8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x4001984a80?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x4001984a80, 0x400205a000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4602
	/usr/local/go/src/testing/testing.go:1997 +0x364
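
Goroutine 5050 above is where UserAppExistsAfterStop is parked: PodWait (helpers_test.go:380) sits inside wait.PollUntilContextTimeout with a one-second interval (the 0x3b9aca00 argument) and immediate=true (the trailing 0x1). A minimal sketch of that polling pattern, using a toy condition rather than minikube's actual pod check:

// Sketch of the poll loop shape seen in goroutine 5050; the condition below
// is a stand-in, not the real pod readiness check.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForPod re-runs podReady roughly once per second until it reports done,
// returns an error, or the timeout expires.
func waitForPod(ctx context.Context, timeout time.Duration, podReady func(context.Context) (bool, error)) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true, podReady)
}

func main() {
	// A condition that is never ready, so this returns a timeout error after
	// about three seconds instead of blocking for minutes as in the dump.
	err := waitForPod(context.Background(), 3*time.Second,
		func(context.Context) (bool, error) { return false, nil })
	fmt.Println("waitForPod:", err)
}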

goroutine 178 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000662950, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000662940)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006f6a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004ca4d0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x400043af38, {0x369e520, 0x40014114a0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40014114a0?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000861a80, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 157
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 157 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006f6a20, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 171
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3899 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3898
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4889 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4888
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1986 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x4000281680, 0x4001b33500)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1985
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4990 [chan receive]:
testing.(*T).Run(0x4001964540, {0x297a830?, 0x40000006ee?}, 0x4001626200)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4001964540)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4001964540, 0x40020b8100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3735
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4165 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40000823f0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4161
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1192 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x40016d86c0)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1189
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 4010 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000662f50, 0x15)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000662f40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006f6060)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400145cbd0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40017c8ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x400043ef38, {0x369e520, 0x40004369f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40017c8fa8?, {0x369e520?, 0x40004369f0?}, 0xc0?, 0x4001499980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001351020, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4035
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3789 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4000778000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3785
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4034 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4004f64a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4033
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 872 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0x4000460d10, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000460d00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016304e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002e6540?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40000d8f38, {0x369e520, 0x40015f0540}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40015f0540?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40006f5da0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 844
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4448 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001b4ce40, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4446
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4256 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4255
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4614 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001707080, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4609
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3898 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x400141df40, 0x4001504f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x96?, 0x400141df40, 0x400141df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x400141df50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x4004f64a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3895
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4170 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40017c9f40, 0x40017c9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x0?, 0x40017c9f40, 0x40017c9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x4000266ee0?, 0x400049c0f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000281380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3255 [chan receive, 42 minutes]:
testing.(*T).Run(0x40015208c0, {0x296d71f?, 0x212bb5520ab5?}, 0x40016d6228)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40015208c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40015208c0, 0x339baf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4887 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001561550, 0x12)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001561540)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001addc80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400157c230?, 0x6fc23ac00?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x22ee5c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4001367f38, {0x369e520, 0x400034ca20}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x400034ca20?}, 0x5c?, 0x4000281500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001c3d840, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4884
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3355 [chan receive, 19 minutes]:
testing.(*testState).waitParallel(0x40007260a0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1906 +0x4c4
testing.tRunner(0x40019841c0, 0x40016d6228)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3255
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 844 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016304e0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 842
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4613 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40019656c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4609
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 676 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff74eb2c00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400048e680?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400048e680)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400048e680)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4002324740)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4002324740)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004ccd00, {0x36d4000, 0x4002324740})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004ccd00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 674
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 4883 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4879
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4012 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4011
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 874 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 873
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3795 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3794
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1098 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400174a780, 0x400170ca80)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1097
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4888 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001540740, 0x4001540788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x83?, 0x4001540740, 0x4001540788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x4001540750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4884
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1090 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400174a180, 0x400170c850)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 791
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1954 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x4000280900, 0x4001b32770)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1953
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1191 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x40016d86c0)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1189
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 4254 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000663710, 0x14)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000663700)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400177e1e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400145dab0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40017c86b8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40000d4f38, {0x369e520, 0x40006611d0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40017c8788?, {0x369e520?, 0x40006611d0?}, 0xe0?, 0x40017c87a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f2c5f0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4267
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1022 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001571980, 0x400157c9a0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1021
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5332 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x10, 0x40013cbb18, 0x4, 0x40025e8750, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40013cbc78?, 0x1929a0?, 0xffffe1eb01a1?, 0x0?, 0x4002244b60?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40016e8180)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40013cbc48?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001940300)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001940300)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x4004f65500, 0x4001940300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.validateSecondStart({0x36e6618, 0x400020ed20}, 0x4004f65500, {0x400173c1b0, 0x11}, {0x1135e455?, 0x1135e45500161e84?}, {0x69411456?, 0x40013cbf58?}, {0x40004cc600?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0x90
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x4004f65500?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x4004f65500, 0x4001626200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4990
	/usr/local/go/src/testing/testing.go:1997 +0x364
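
Goroutine 5332 shows the SecondStart step blocked in (*exec.Cmd).Run/Wait on a child process launched through the integration Run helper (helpers_test.go:104). A minimal sketch of that wrap-and-wait pattern; the binary name, subcommand, and profile below are placeholders, not values from this run.

// Sketch only: run a CLI command and wait for it, roughly the shape of the
// call chain in goroutine 5332.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func runCLI(parent context.Context, bin string, args ...string) ([]byte, error) {
	// Give the child process its own deadline so a hung command surfaces as
	// an error here instead of riding out the whole go test timeout.
	ctx, cancel := context.WithTimeout(parent, 10*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, bin, args...)
	return cmd.CombinedOutput() // blocks in Cmd.Wait, as in the trace above
}

func main() {
	out, err := runCLI(context.Background(), "minikube", "start", "-p", "sketch-profile")
	if err != nil {
		fmt.Println("command failed:", err)
	}
	fmt.Printf("%s", out)
}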

goroutine 4266 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4001964a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4250
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4618 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001547740, 0x4001446f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xd5?, 0x4001547740, 0x4001547788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x4001547750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40019656c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4614
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4884 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001addc80, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4879
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3733 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40013e88c0, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3296
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3296 [chan receive, 30 minutes]:
testing.(*T).Run(0x4001521340, {0x296d71f?, 0x400136bf58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001521340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001521340, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4171 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4170
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1299 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff74eb2a00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001626580?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001626580)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001626580)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001934f40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001934f40)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000105b00, {0x36d4000, 0x4001934f40})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000105b00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1297
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 1527 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x400141c740, 0x4001364f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x31?, 0x400141c740, 0x400141c788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000280a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1556
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 2065 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x4000778480, 0x40016b8620)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1476
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1555 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40014c7500?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1554
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 873 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001419740, 0x40000d7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xf0?, 0x4001419740, 0x4001419788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x161f90?, 0x4004f65500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40013c0d80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 844
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 843 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013c0780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 842
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4381 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001561490, 0x14)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001561480)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001add440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400170c230?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40017c8eb8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4001436f38, {0x369e520, 0x400185dbf0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40017c8f88?, {0x369e520?, 0x400185dbf0?}, 0xe0?, 0x40017c8fa8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001554c40, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4378
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3895 [chan receive, 25 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001c4a8a0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3893
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3790 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40015db6e0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3785
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1556 [chan receive, 81 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001add9e0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1554
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3735 [chan receive, 10 minutes]:
testing.(*T).Run(0x40013e8e00, {0x296eb91?, 0x0?}, 0x40020b8100)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x40013e8e00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x40013e8e00, 0x4002324180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3733
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3793 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001934a50, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001934a40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40015db6e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400071a1c0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40017cc6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4005020f38, {0x369e520, 0x40016ee930}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40017cc7a8?, {0x369e520?, 0x40016ee930?}, 0x90?, 0x400020f7a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40023d8450, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3790
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4450 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0x4002325c10, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4002325c00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001b4ce40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000082ee0?, 0x54bdd8?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40002665c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4001439f38, {0x369e520, 0x4001411da0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x4001411da0?}, 0xe0?, 0x40016a4d80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40023d9cf0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4448
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4377 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40016f8300?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4367
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4382 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001540f40, 0x4001440f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xb?, 0x4001540f40, 0x4001540f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x4001540f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40016f8300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4378
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4451 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001547f40, 0x4001547f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x79?, 0x4001547f40, 0x4001547f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x4001547f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x4004f65880?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4448
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4383 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4382
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 4378 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001add440, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4367
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 4255 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40013fb740, 0x40013fb788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xc7?, 0x40013fb740, 0x40013fb788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x40013fb750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x4001964a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4267
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4166 [chan receive, 23 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001630840, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4161
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 1526 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001934990, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001934980)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001add9e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004912d0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x400501df38, {0x369e520, 0x40015f2810}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40015f2810?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400198bb10, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1556
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 1528 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1527
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 4617 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001934850, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001934840)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001707080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001b32620?, 0x3935653866303563?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x2020202020202020?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4001434f38, {0x369e520, 0x4001745e00}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x3338326431326466?, {0x369e520?, 0x4001745e00?}, 0x60?, 0x6f67612073646e6f?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400197c820, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4614
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 4035 [chan receive, 23 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006f6060, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4033
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 4602 [chan receive, 2 minutes]:
testing.(*T).Run(0x4001984000, {0x299a203?, 0x40000006ee?}, 0x400205a000)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4001984000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4001984000, 0x400205b200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3737
	/usr/local/go/src/testing/testing.go:1997 +0x364
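
Goroutine 4602 above is simply a parent test blocked in testing.(*T).Run: the parent goroutine waits on a channel receive until its subtest goroutine finishes. A tiny illustrative test with the same shape (hypothetical names, not part of the minikube suite; run with go test):

package example_test

import "testing"

// TestParent blocks inside t.Run, like goroutine 4602, until the subtest
// goroutine completes and signals back to the parent.
func TestParent(t *testing.T) {
	t.Run("child", func(t *testing.T) {
		if got := 1 + 1; got != 2 {
			t.Fatalf("unexpected sum: %d", got)
		}
	})
}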

                                                
                                                
goroutine 4267 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400177e1e0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4250
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 4447 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4004f65880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4446
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 5333 [IO wait]:
internal/poll.runtime_pollWait(0xffff74eb2600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001706780?, 0x4000c6db51?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001706780, {0x4000c6db51, 0x4af, 0x4af})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a67a0, {0x4000c6db51?, 0x4001547568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001358db0, {0x369c8e8, 0x4000724260})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x4001358db0}, {0x369c8e8, 0x4000724260}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a67a0?, {0x369cae0, 0x4001358db0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a67a0, {0x369cae0, 0x4001358db0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x4001358db0}, {0x369c968, 0x40000a67a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4000280780?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5332
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4
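
Goroutine 5333 (and 5334 below) is the copier that os/exec starts when a command's output is captured into an in-memory buffer, which is how the harness collects output from the minikube binary. A minimal standard-library sketch of the same pattern (hypothetical command, not the test's actual invocation):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var out bytes.Buffer
	cmd := exec.Command("echo", "hello from a child process")
	// Because Stdout is not an *os.File, (*Cmd).Start spawns a goroutine that
	// io.Copy-s the child's pipe into the buffer -- the writerDescriptor
	// goroutine visible in the trace above.
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		fmt.Println("command failed:", err)
		return
	}
	fmt.Print(out.String())
}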

                                                
                                                
goroutine 5051 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4001984a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5050
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 5052 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40020aeae0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5050
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5083 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000662c10, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000662c00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40020aeae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400023ee00?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4001444f38, {0x369e520, 0x40016e2240}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40016e2240?}, 0xa0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001350f10, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5052
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 5084 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001442f40, 0x4001442f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x80?, 0x4001442f40, 0x4001442f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x4000778900?, 0x400049ea00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001964a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5052
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5085 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5084
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 5335 [select]:
os/exec.(*Cmd).watchCtx(0x4001940300, 0x40024a2a80)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5332
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 5334 [IO wait]:
internal/poll.runtime_pollWait(0xffff74c20e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001706900?, 0x40017b088a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001706900, {0x40017b088a, 0xf776, 0xf776})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a67e8, {0x40017b088a?, 0x40017ccd68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001358de0, {0x369c8e8, 0x4000724268})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x4001358de0}, {0x369c8e8, 0x4000724268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a67e8?, {0x369cae0, 0x4001358de0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a67e8, {0x369cae0, 0x4001358de0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x4001358de0}, {0x369c968, 0x40000a67e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001940180?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5332
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

                                                
                                    

Test pass (237/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.67
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 6.13
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.1
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 5.88
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.65
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 170.03
40 TestAddons/serial/GCPAuth/Namespaces 0.21
41 TestAddons/serial/GCPAuth/FakeCredentials 10.89
57 TestAddons/StoppedEnableDisable 12.44
58 TestCertOptions 40.02
59 TestCertExpiration 240.57
61 TestForceSystemdFlag 40.8
62 TestForceSystemdEnv 35.33
67 TestErrorSpam/setup 31.49
68 TestErrorSpam/start 0.94
69 TestErrorSpam/status 1.56
70 TestErrorSpam/pause 5.81
71 TestErrorSpam/unpause 5.58
72 TestErrorSpam/stop 1.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 75.99
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 39.36
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
84 TestFunctional/serial/CacheCmd/cache/add_local 1.31
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 35.01
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.82
96 TestFunctional/serial/InvalidService 4.74
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 13.32
100 TestFunctional/parallel/DryRun 0.45
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 1.12
106 TestFunctional/parallel/ServiceCmdConnect 7.66
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 20.85
110 TestFunctional/parallel/SSHCmd 0.68
111 TestFunctional/parallel/CpCmd 2.47
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.27
118 TestFunctional/parallel/NodeLabels 0.11
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
122 TestFunctional/parallel/License 0.35
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.5
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.2
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
136 TestFunctional/parallel/ProfileCmd/profile_list 0.42
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
138 TestFunctional/parallel/MountCmd/any-port 8.28
139 TestFunctional/parallel/ServiceCmd/List 0.53
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
142 TestFunctional/parallel/ServiceCmd/Format 0.38
143 TestFunctional/parallel/ServiceCmd/URL 0.41
144 TestFunctional/parallel/MountCmd/specific-port 1.9
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
146 TestFunctional/parallel/Version/short 0.09
147 TestFunctional/parallel/Version/components 1.39
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.72
153 TestFunctional/parallel/ImageCommands/Setup 0.64
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.59
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.52
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.4
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.1
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.9
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.94
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.01
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.45
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.68
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.13
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.67
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.53
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.3
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.37
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.83
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.84
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.11
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.61
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.64
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.23
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.2
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.79
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.03
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.52
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.73
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.01
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 200.05
265 TestMultiControlPlane/serial/DeployApp 8.23
266 TestMultiControlPlane/serial/PingHostFromPods 1.66
267 TestMultiControlPlane/serial/AddWorkerNode 60.09
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
270 TestMultiControlPlane/serial/CopyFile 21.87
271 TestMultiControlPlane/serial/StopSecondaryNode 12.94
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
273 TestMultiControlPlane/serial/RestartSecondaryNode 33.01
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.35
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 123.74
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.06
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
278 TestMultiControlPlane/serial/StopCluster 36.26
281 TestMultiControlPlane/serial/AddSecondaryNode 77.74
287 TestJSONOutput/start/Command 80.88
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.88
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 42.44
313 TestKicCustomNetwork/use_default_bridge_network 35.75
314 TestKicExistingNetwork 33.26
315 TestKicCustomSubnet 35.76
316 TestKicStaticIP 39.33
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 73.36
321 TestMountStart/serial/StartWithMountFirst 8.8
322 TestMountStart/serial/VerifyMountFirst 0.26
323 TestMountStart/serial/StartWithMountSecond 9.09
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.84
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 133.13
333 TestMultiNode/serial/DeployApp2Nodes 4.94
334 TestMultiNode/serial/PingHostFrom2Pods 0.99
335 TestMultiNode/serial/AddNode 57.8
336 TestMultiNode/serial/MultiNodeLabels 0.11
337 TestMultiNode/serial/ProfileList 0.75
338 TestMultiNode/serial/CopyFile 10.55
339 TestMultiNode/serial/StopNode 2.4
340 TestMultiNode/serial/StartAfterStop 8.75
341 TestMultiNode/serial/RestartKeepsNodes 79.92
342 TestMultiNode/serial/DeleteNode 5.63
343 TestMultiNode/serial/StopMultiNode 24.04
344 TestMultiNode/serial/RestartMultiNode 48.97
345 TestMultiNode/serial/ValidateNameConflict 35.68
350 TestPreload 121.5
352 TestScheduledStopUnix 109.77
355 TestInsufficientStorage 12.54
356 TestRunningBinaryUpgrade 299.8
359 TestMissingContainerUpgrade 109.14
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 49.79
363 TestNoKubernetes/serial/StartWithStopK8s 105.38
375 TestNoKubernetes/serial/Start 9.15
376 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
377 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
378 TestNoKubernetes/serial/ProfileList 1.03
379 TestNoKubernetes/serial/Stop 1.39
380 TestNoKubernetes/serial/StartNoArgs 8.3
381 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
382 TestStoppedBinaryUpgrade/Setup 0.79
383 TestStoppedBinaryUpgrade/Upgrade 301.41
384 TestStoppedBinaryUpgrade/MinikubeLogs 1.7
393 TestPause/serial/Start 81.43
394 TestPause/serial/SecondStartNoReconfiguration 28.94
x
+
TestDownloadOnly/v1.28.0/json-events (7.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-971616 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-971616 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.667238753s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 06:13:00.777851 1599255 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1216 06:13:00.777944 1599255 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-971616
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-971616: exit status 85 (96.534044ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-971616 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-971616 │ jenkins │ v1.37.0 │ 16 Dec 25 06:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:12:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:12:53.157833 1599260 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:12:53.158049 1599260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:12:53.158077 1599260 out.go:374] Setting ErrFile to fd 2...
	I1216 06:12:53.158099 1599260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:12:53.158379 1599260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	W1216 06:12:53.158554 1599260 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22141-1596013/.minikube/config/config.json: open /home/jenkins/minikube-integration/22141-1596013/.minikube/config/config.json: no such file or directory
	I1216 06:12:53.159014 1599260 out.go:368] Setting JSON to true
	I1216 06:12:53.159920 1599260 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32125,"bootTime":1765833449,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:12:53.160014 1599260 start.go:143] virtualization:  
	I1216 06:12:53.165592 1599260 out.go:99] [download-only-971616] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1216 06:12:53.165778 1599260 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 06:12:53.165849 1599260 notify.go:221] Checking for updates...
	I1216 06:12:53.168942 1599260 out.go:171] MINIKUBE_LOCATION=22141
	I1216 06:12:53.172168 1599260 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:12:53.175212 1599260 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:12:53.178222 1599260 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:12:53.181279 1599260 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 06:12:53.187228 1599260 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 06:12:53.187509 1599260 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:12:53.212269 1599260 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:12:53.212374 1599260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:12:53.287212 1599260 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-16 06:12:53.276458886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:12:53.287327 1599260 docker.go:319] overlay module found
	I1216 06:12:53.290465 1599260 out.go:99] Using the docker driver based on user configuration
	I1216 06:12:53.290509 1599260 start.go:309] selected driver: docker
	I1216 06:12:53.290524 1599260 start.go:927] validating driver "docker" against <nil>
	I1216 06:12:53.290641 1599260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:12:53.345505 1599260 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-16 06:12:53.335870199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:12:53.345667 1599260 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:12:53.345928 1599260 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 06:12:53.346089 1599260 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 06:12:53.349565 1599260 out.go:171] Using Docker driver with root privileges
	I1216 06:12:53.352607 1599260 cni.go:84] Creating CNI manager for ""
	I1216 06:12:53.352676 1599260 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:12:53.352686 1599260 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:12:53.352760 1599260 start.go:353] cluster config:
	{Name:download-only-971616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-971616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:12:53.355720 1599260 out.go:99] Starting "download-only-971616" primary control-plane node in "download-only-971616" cluster
	I1216 06:12:53.355755 1599260 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:12:53.358698 1599260 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:12:53.358751 1599260 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 06:12:53.358941 1599260 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:12:53.375814 1599260 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 06:12:53.376007 1599260 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 06:12:53.376106 1599260 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 06:12:53.413627 1599260 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:12:53.413652 1599260 cache.go:65] Caching tarball of preloaded images
	I1216 06:12:53.413816 1599260 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 06:12:53.417213 1599260 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1216 06:12:53.417241 1599260 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1216 06:12:53.493509 1599260 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1216 06:12:53.493645 1599260 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-971616 host does not exist
	  To start a cluster, run: "minikube start -p download-only-971616"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
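
The log above downloads the preload tarball with an md5 checksum appended to the URL and verifies it after the fetch. A rough, self-contained sketch of that kind of checksum-verified download using only the standard library; the URL and destination are placeholders and this is not minikube's actual downloader:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// download fetches url into dest and fails if the body's MD5 (lower-case hex)
// does not match wantMD5. Placeholder helper for illustration only.
func download(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Stream the body to disk and into the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Example values only; substitute the real tarball URL and checksum.
	if err := download("https://example.com/preload.tar.lz4", "/tmp/preload.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
		fmt.Println(err)
	}
}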

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-971616
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (6.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-783122 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-783122 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.127864583s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (6.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 06:13:07.379006 1599255 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 06:13:07.379043 1599255 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-783122
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-783122: exit status 85 (95.14929ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-971616 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-971616 │ jenkins │ v1.37.0 │ 16 Dec 25 06:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-971616                                                                                                                                                   │ download-only-971616 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ -o=json --download-only -p download-only-783122 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-783122 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:13:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:13:01.295536 1599460 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:13:01.295729 1599460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:01.295739 1599460 out.go:374] Setting ErrFile to fd 2...
	I1216 06:13:01.295746 1599460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:01.296035 1599460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:13:01.296449 1599460 out.go:368] Setting JSON to true
	I1216 06:13:01.297340 1599460 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32133,"bootTime":1765833449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:13:01.297417 1599460 start.go:143] virtualization:  
	I1216 06:13:01.300937 1599460 out.go:99] [download-only-783122] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:13:01.301195 1599460 notify.go:221] Checking for updates...
	I1216 06:13:01.304020 1599460 out.go:171] MINIKUBE_LOCATION=22141
	I1216 06:13:01.307071 1599460 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:13:01.310127 1599460 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:13:01.313070 1599460 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:13:01.316001 1599460 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 06:13:01.321796 1599460 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 06:13:01.322084 1599460 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:13:01.352592 1599460 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:13:01.352760 1599460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:01.412453 1599460 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-16 06:13:01.402738208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:01.412589 1599460 docker.go:319] overlay module found
	I1216 06:13:01.415623 1599460 out.go:99] Using the docker driver based on user configuration
	I1216 06:13:01.415670 1599460 start.go:309] selected driver: docker
	I1216 06:13:01.415681 1599460 start.go:927] validating driver "docker" against <nil>
	I1216 06:13:01.415809 1599460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:01.468622 1599460 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-16 06:13:01.459787854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:01.468791 1599460 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:13:01.469078 1599460 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 06:13:01.469236 1599460 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 06:13:01.472374 1599460 out.go:171] Using Docker driver with root privileges
	I1216 06:13:01.475280 1599460 cni.go:84] Creating CNI manager for ""
	I1216 06:13:01.475361 1599460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:13:01.475375 1599460 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:13:01.475458 1599460 start.go:353] cluster config:
	{Name:download-only-783122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-783122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:13:01.478685 1599460 out.go:99] Starting "download-only-783122" primary control-plane node in "download-only-783122" cluster
	I1216 06:13:01.478709 1599460 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:13:01.481598 1599460 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:13:01.481641 1599460 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:01.481756 1599460 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:13:01.499259 1599460 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 06:13:01.499406 1599460 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 06:13:01.499425 1599460 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 06:13:01.499430 1599460 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 06:13:01.499437 1599460 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 06:13:01.532216 1599460 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 06:13:01.532244 1599460 cache.go:65] Caching tarball of preloaded images
	I1216 06:13:01.532497 1599460 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 06:13:01.535662 1599460 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1216 06:13:01.535721 1599460 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1216 06:13:01.619436 1599460 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1216 06:13:01.619494 1599460 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-783122 host does not exist
	  To start a cluster, run: "minikube start -p download-only-783122"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.10s)
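Note: the preload above is fetched with an md5 checksum pinned in the URL (checksum=md5:36a1245638f6169d426638fac0bd307d). A minimal sketch of re-verifying an already-downloaded preload tarball by hand, using the checksum value from the log above; the cache path below assumes the default MINIKUBE_HOME and will differ on this CI host:

    # Values copied from the download log above; adjust PRELOAD to your own cache location.
    PRELOAD=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
    EXPECTED=36a1245638f6169d426638fac0bd307d
    # md5sum -c expects "<hash>  <file>" lines on stdin and reports OK or FAILED per file.
    echo "${EXPECTED}  ${PRELOAD}" | md5sum -c -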

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-783122
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (5.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-352125 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-352125 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.88315522s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (5.88s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 06:13:13.715831 1599255 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1216 06:13:13.715866 1599255 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-352125
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-352125: exit status 85 (86.686661ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-971616 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-971616 │ jenkins │ v1.37.0 │ 16 Dec 25 06:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-971616                                                                                                                                                          │ download-only-971616 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ -o=json --download-only -p download-only-783122 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-783122 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p download-only-783122                                                                                                                                                          │ download-only-783122 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ start   │ -o=json --download-only -p download-only-352125 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-352125 │ jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:13:07
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:13:07.878126 1599656 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:13:07.878272 1599656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:07.878284 1599656 out.go:374] Setting ErrFile to fd 2...
	I1216 06:13:07.878290 1599656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:13:07.878543 1599656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:13:07.878974 1599656 out.go:368] Setting JSON to true
	I1216 06:13:07.879870 1599656 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32139,"bootTime":1765833449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:13:07.879949 1599656 start.go:143] virtualization:  
	I1216 06:13:07.883405 1599656 out.go:99] [download-only-352125] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:13:07.883689 1599656 notify.go:221] Checking for updates...
	I1216 06:13:07.886627 1599656 out.go:171] MINIKUBE_LOCATION=22141
	I1216 06:13:07.889783 1599656 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:13:07.892781 1599656 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:13:07.895785 1599656 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:13:07.898775 1599656 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 06:13:07.904391 1599656 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 06:13:07.904676 1599656 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:13:07.938229 1599656 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:13:07.938344 1599656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:07.995963 1599656 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 06:13:07.986783512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:07.996068 1599656 docker.go:319] overlay module found
	I1216 06:13:07.998984 1599656 out.go:99] Using the docker driver based on user configuration
	I1216 06:13:07.999021 1599656 start.go:309] selected driver: docker
	I1216 06:13:07.999028 1599656 start.go:927] validating driver "docker" against <nil>
	I1216 06:13:07.999143 1599656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:13:08.057805 1599656 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 06:13:08.04871101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:13:08.057972 1599656 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:13:08.058261 1599656 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 06:13:08.058424 1599656 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 06:13:08.061520 1599656 out.go:171] Using Docker driver with root privileges
	I1216 06:13:08.064372 1599656 cni.go:84] Creating CNI manager for ""
	I1216 06:13:08.064453 1599656 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 06:13:08.064485 1599656 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:13:08.064625 1599656 start.go:353] cluster config:
	{Name:download-only-352125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-352125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:13:08.067603 1599656 out.go:99] Starting "download-only-352125" primary control-plane node in "download-only-352125" cluster
	I1216 06:13:08.067638 1599656 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 06:13:08.070590 1599656 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:13:08.070645 1599656 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:13:08.070704 1599656 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:13:08.087477 1599656 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 06:13:08.087613 1599656 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 06:13:08.087634 1599656 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 06:13:08.087638 1599656 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 06:13:08.087647 1599656 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 06:13:08.122341 1599656 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:13:08.122375 1599656 cache.go:65] Caching tarball of preloaded images
	I1216 06:13:08.122568 1599656 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:13:08.125677 1599656 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1216 06:13:08.125710 1599656 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1216 06:13:08.209354 1599656 preload.go:295] Got checksum from GCS API "e7da2fb676059c00535073e4a61150f1"
	I1216 06:13:08.209450 1599656 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e7da2fb676059c00535073e4a61150f1 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 06:13:12.777667 1599656 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 06:13:12.778107 1599656 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/download-only-352125/config.json ...
	I1216 06:13:12.778161 1599656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/download-only-352125/config.json: {Name:mk95bce1271a8e6c1fa790c37714a06746cfaffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:13:12.778381 1599656 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 06:13:12.778590 1599656 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-352125 host does not exist
	  To start a cluster, run: "minikube start -p download-only-352125"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)
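The v1.35.0-beta.0 run additionally downloads kubectl with its published sha256 file used as the checksum (checksum=file:...kubectl.sha256). A sketch of the equivalent manual verification, assuming network access to dl.k8s.io:

    # Fetch the binary and its published checksum, then verify them together.
    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256"
    # The .sha256 file contains only the digest, so build the "<hash>  <file>" line by hand.
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check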

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-352125
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 06:13:15.029176 1599255 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-707915 --alsologtostderr --binary-mirror http://127.0.0.1:38553 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-707915" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-707915
--- PASS: TestBinaryMirror (0.65s)
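TestBinaryMirror points minikube at a local HTTP endpoint via --binary-mirror so Kubernetes binaries are fetched from there instead of dl.k8s.io. A rough sketch of doing the same by hand; the port, the profile name, and the assumption that the mirror must reproduce dl.k8s.io's path layout (release/<version>/bin/<os>/<arch>/<binary>) are mine, not taken from the test:

    # Serve a local directory as the mirror (assumed layout: release/v1.34.2/bin/linux/arm64/kubectl, ...).
    python3 -m http.server 8000 &
    # Hypothetical profile name; --binary-mirror is the flag exercised by the test above.
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:8000 --driver=docker --container-runtime=crio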

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-142606
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-142606: exit status 85 (91.813599ms)

                                                
                                                
-- stdout --
	* Profile "addons-142606" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-142606"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-142606
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-142606: exit status 85 (99.225115ms)

                                                
                                                
-- stdout --
	* Profile "addons-142606" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-142606"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
x
+
TestAddons/Setup (170.03s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-142606 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-142606 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m50.030150656s)
--- PASS: TestAddons/Setup (170.03s)
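With this many addons requested in one start, a quick way to confirm what actually came up after setup (a sketch; the profile name is taken from the run above):

    # Show addon enable/disable state for this profile, then the pods they created.
    out/minikube-linux-arm64 -p addons-142606 addons list
    kubectl --context addons-142606 get pods -A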

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-142606 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-142606 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-142606 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-142606 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [16a10a01-65c5-40c7-b5ec-ed523d74b116] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [16a10a01-65c5-40c7-b5ec-ed523d74b116] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004111187s
addons_test.go:696: (dbg) Run:  kubectl --context addons-142606 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-142606 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-142606 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-142606 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)
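The assertions above exec into the pod, which shows the runtime environment. A small extra check (my sketch, assuming the busybox pod from this test still exists) is to read the pod spec itself, which confirms the gcp-auth webhook injected the variable at admission time rather than it coming from the image:

    # List the env var names present on the container spec; GOOGLE_APPLICATION_CREDENTIALS
    # and GOOGLE_CLOUD_PROJECT should appear if the webhook mutated the pod.
    kubectl --context addons-142606 get pod busybox \
      -o jsonpath='{.spec.containers[0].env[*].name}'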

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-142606
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-142606: (12.161210341s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-142606
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-142606
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-142606
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

                                                
                                    
x
+
TestCertOptions (40.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-755102 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-755102 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.08928551s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-755102 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-755102 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-755102 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-755102" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-755102
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-755102: (2.123151949s)
--- PASS: TestCertOptions (40.02s)
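To see that the extra --apiserver-ips/--apiserver-names values actually landed in the serving certificate, a sketch that filters the same openssl output the test captures (assumes the cert-options-755102 profile has not yet been deleted):

    # Print the SAN block of the apiserver certificate; the extra IPs and names should be listed.
    out/minikube-linux-arm64 -p cert-options-755102 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'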

                                                
                                    
x
+
TestCertExpiration (240.57s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-799129 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-799129 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.401305888s)
E1216 07:33:08.326211 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-799129 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1216 07:36:06.670887 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:36:06.817676 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-799129 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.589165229s)
helpers_test.go:176: Cleaning up "cert-expiration-799129" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-799129
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-799129: (2.578079982s)
--- PASS: TestCertExpiration (240.57s)
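The test flips --cert-expiration from 3m to 8760h and restarts. A sketch of inspecting the resulting expiry date directly on the node (assumes the profile still exists; openssl prints a notAfter= line):

    # Show the expiry of the apiserver certificate after the second start.
    out/minikube-linux-arm64 -p cert-expiration-799129 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"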

                                                
                                    
x
+
TestForceSystemdFlag (40.8s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-583064 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-583064 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.455929105s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-583064 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-583064" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-583064
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-583064: (2.969057181s)
--- PASS: TestForceSystemdFlag (40.80s)
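The test reads the CRI-O drop-in that --force-systemd writes. A sketch of narrowing that same file down to the relevant key; cgroup_manager is a standard CRI-O option, and the "systemd" value shown in the comment is my assumption about what the test expects:

    # Inspect only the cgroup manager setting in the drop-in the test cats above.
    out/minikube-linux-arm64 -p force-systemd-flag-583064 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
    # Expected something like: cgroup_manager = "systemd"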

                                                
                                    
x
+
TestForceSystemdEnv (35.33s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-711446 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-711446 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.269893507s)
helpers_test.go:176: Cleaning up "force-systemd-env-711446" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-711446
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-711446: (3.063725243s)
--- PASS: TestForceSystemdEnv (35.33s)
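TestForceSystemdEnv exercises the same behavior driven by environment rather than a flag. A hedged sketch of doing that manually, assuming MINIKUBE_FORCE_SYSTEMD is the env-var form of --force-systemd (minikube maps flags to MINIKUBE_* variables); the profile name here is hypothetical:

    # Force the systemd cgroup manager via environment instead of --force-systemd.
    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-demo \
      --memory=3072 --driver=docker --container-runtime=crio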

                                                
                                    
x
+
TestErrorSpam/setup (31.49s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-625531 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-625531 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-625531 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-625531 --driver=docker  --container-runtime=crio: (31.490585997s)
--- PASS: TestErrorSpam/setup (31.49s)

                                                
                                    
x
+
TestErrorSpam/start (0.94s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

                                                
                                    
x
+
TestErrorSpam/status (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 status
--- PASS: TestErrorSpam/status (1.56s)

                                                
                                    
x
+
TestErrorSpam/pause (5.81s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause: exit status 80 (2.41447741s)

                                                
                                                
-- stdout --
	* Pausing node nospam-625531 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause: exit status 80 (1.784738789s)

                                                
                                                
-- stdout --
	* Pausing node nospam-625531 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:20:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause: exit status 80 (1.606539469s)

                                                
                                                
-- stdout --
	* Pausing node nospam-625531 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:20:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.81s)
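All three pause attempts above fail the same way: "sudo runc list -f json" exits non-zero because /run/runc does not exist on the node. The commands below only reproduce that probe by hand from the host, reusing the command and path already shown in the stderr; they are diagnostics, not a fix:

    # Re-run the exact probe minikube uses when pausing.
    out/minikube-linux-arm64 -p nospam-625531 ssh "sudo runc list -f json"
    # See whether the runc state directory exists at all on this crio node.
    out/minikube-linux-arm64 -p nospam-625531 ssh "sudo ls -la /run/runc"

Whether the directory is missing because CRI-O on this image tracks container state somewhere else is a guess, not something the log establishes.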

                                                
                                    
x
+
TestErrorSpam/unpause (5.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause: exit status 80 (1.86844317s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-625531 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:20:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause: exit status 80 (1.878913267s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-625531 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:20:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause: exit status 80 (1.830342991s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-625531 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T06:20:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.58s)
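
Note: the pause and unpause failures above share one root cause visible in their stderr blocks: "sudo runc list -f json" exits 1 inside the node because /run/runc does not exist. A minimal way to reproduce that check by hand, assuming the nospam-625531 profile is still up (these commands are illustrative and not part of the test):

	out/minikube-linux-arm64 -p nospam-625531 ssh "sudo ls /run/runc"
	out/minikube-linux-arm64 -p nospam-625531 ssh "sudo runc list -f json"

If the first command reports "No such file or directory", pause and unpause will likely keep failing with GUEST_PAUSE/GUEST_UNPAUSE in the same way.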

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 stop: (1.319059728s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-625531 --log_dir /tmp/nospam-625531 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
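
Note: the "local sync path" above comes from minikube's file-sync mechanism: files placed under $MINIKUBE_HOME/files/ are copied into the node at the matching absolute path when the cluster is (re)started, which is what TestFunctional/parallel/FileSync verifies later in this report. A small sketch of the same mechanism with a hypothetical file (demo.txt is not part of the test; $MINIKUBE_HOME matches the path shown in the log):

	mkdir -p $MINIKUBE_HOME/files/etc/demo
	echo hello > $MINIKUBE_HOME/files/etc/demo/demo.txt
	out/minikube-linux-arm64 start -p functional-487532
	out/minikube-linux-arm64 -p functional-487532 ssh "cat /etc/demo/demo.txt"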

                                                
                                    
TestFunctional/serial/StartWithProxy (75.99s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-487532 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1216 06:21:06.823928 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:06.830340 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:06.841756 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:06.863249 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:06.904662 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:06.986197 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:07.147798 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:07.469539 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:08.110985 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:09.392495 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:11.954233 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:17.076021 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:21:27.317489 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-487532 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.987463609s)
--- PASS: TestFunctional/serial/StartWithProxy (75.99s)
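
Note: the repeated cert_rotation errors above appear to come from client-go's certificate-reload path (cert_rotation.go) retrying a client.crt that belongs to the earlier addons-142606 profile, whose files are gone; they are leftover noise from that deleted profile and do not affect this start, which completes and passes. One hedged way to check for a stale kubeconfig entry (the delete-context command is illustrative and only applies if such an entry is actually listed):

	kubectl config get-contexts
	kubectl config delete-context addons-142606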

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.36s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 06:21:36.321289 1599255 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-487532 --alsologtostderr -v=8
E1216 06:21:47.798956 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-487532 --alsologtostderr -v=8: (39.354554445s)
functional_test.go:678: soft start took 39.355910628s for "functional-487532" cluster.
I1216 06:22:15.676237 1599255 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (39.36s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-487532 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 cache add registry.k8s.io/pause:3.1: (1.147789759s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 cache add registry.k8s.io/pause:3.3: (1.202288044s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 cache add registry.k8s.io/pause:latest: (1.092806955s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-487532 /tmp/TestFunctionalserialCacheCmdcacheadd_local2389987981/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cache add minikube-local-cache-test:functional-487532
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cache delete minikube-local-cache-test:functional-487532
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-487532
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.545748ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
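
Note: this test exercises the image-cache round trip: the image is removed from the node's runtime with crictl, then "cache reload" pushes the images held in the host-side cache (under $MINIKUBE_HOME/cache/images by default) back into the node so the final inspecti succeeds. The manual equivalent of the sequence above, condensed (same profile and image as in the log):

	out/minikube-linux-arm64 -p functional-487532 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	out/minikube-linux-arm64 -p functional-487532 cache reload
	out/minikube-linux-arm64 -p functional-487532 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"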

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 kubectl -- --context functional-487532 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-487532 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-487532 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 06:22:28.761851 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-487532 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.011035264s)
functional_test.go:776: restart took 35.011131544s for "functional-487532" cluster.
I1216 06:22:58.336321 1599255 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (35.01s)
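
Note: this restart passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, and the profile dump later in this report records it under ExtraOptions. A hedged way to confirm the flag actually reached the running apiserver, assuming the control-plane pod carries the usual component=kube-apiserver label:

	kubectl --context functional-487532 -n kube-system get pod -l component=kube-apiserver \
	  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins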

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-487532 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 logs: (1.470275368s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 logs --file /tmp/TestFunctionalserialLogsFileCmd1381877492/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 logs --file /tmp/TestFunctionalserialLogsFileCmd1381877492/001/logs.txt: (1.823348998s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)

                                                
                                    
TestFunctional/serial/InvalidService (4.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-487532 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-487532
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-487532: exit status 115 (394.478298ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32708 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-487532 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-487532 delete -f testdata/invalidsvc.yaml: (1.096337037s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)
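
Note: the exit status 115 (SVC_UNREACHABLE) above is the expected outcome: the error text shows no running pod backs invalid-svc, so the NodePort that minikube prints has no endpoints behind it. A quick way to see the same condition directly, assuming the testdata manifest is still applied:

	kubectl --context functional-487532 get endpoints invalid-svc
	kubectl --context functional-487532 describe service invalid-svc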

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 config get cpus: exit status 14 (71.537098ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 config get cpus: exit status 14 (74.907415ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-487532 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-487532 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1624402: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.32s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-487532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-487532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (209.458158ms)

                                                
                                                
-- stdout --
	* [functional-487532] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:23:36.649929 1624065 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:23:36.650144 1624065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:23:36.650171 1624065 out.go:374] Setting ErrFile to fd 2...
	I1216 06:23:36.650192 1624065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:23:36.650486 1624065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:23:36.650951 1624065 out.go:368] Setting JSON to false
	I1216 06:23:36.652155 1624065 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32768,"bootTime":1765833449,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:23:36.652248 1624065 start.go:143] virtualization:  
	I1216 06:23:36.655304 1624065 out.go:179] * [functional-487532] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:23:36.658974 1624065 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:23:36.659060 1624065 notify.go:221] Checking for updates...
	I1216 06:23:36.664998 1624065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:23:36.667825 1624065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:23:36.670529 1624065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:23:36.673418 1624065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:23:36.676179 1624065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:23:36.679557 1624065 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:23:36.680254 1624065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:23:36.717424 1624065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:23:36.717560 1624065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:23:36.787623 1624065 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 06:23:36.778569597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:23:36.787725 1624065 docker.go:319] overlay module found
	I1216 06:23:36.790883 1624065 out.go:179] * Using the docker driver based on existing profile
	I1216 06:23:36.793731 1624065 start.go:309] selected driver: docker
	I1216 06:23:36.793759 1624065 start.go:927] validating driver "docker" against &{Name:functional-487532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-487532 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:23:36.793930 1624065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:23:36.797423 1624065 out.go:203] 
	W1216 06:23:36.800280 1624065 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 06:23:36.803180 1624065 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-487532 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
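
Note: the exit status 23 with RSRC_INSUFFICIENT_REQ_MEMORY shows that --dry-run still runs resource validation: the requested 250MB is below the usable minimum of 1800MB quoted in the message, while the second invocation (without --memory) passes. A sketch of a dry run that clears the same check, keeping the test's other flags (the 2048MB value is illustrative):

	out/minikube-linux-arm64 start -p functional-487532 --dry-run --memory 2048MB --alsologtostderr --driver=docker --container-runtime=crio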

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-487532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-487532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (230.03574ms)

                                                
                                                
-- stdout --
	* [functional-487532] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:23:36.433859 1624016 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:23:36.434078 1624016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:23:36.434110 1624016 out.go:374] Setting ErrFile to fd 2...
	I1216 06:23:36.434134 1624016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:23:36.435972 1624016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:23:36.436545 1624016 out.go:368] Setting JSON to false
	I1216 06:23:36.437636 1624016 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":32768,"bootTime":1765833449,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:23:36.437738 1624016 start.go:143] virtualization:  
	I1216 06:23:36.441288 1624016 out.go:179] * [functional-487532] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1216 06:23:36.444411 1624016 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:23:36.444520 1624016 notify.go:221] Checking for updates...
	I1216 06:23:36.450879 1624016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:23:36.453915 1624016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:23:36.456874 1624016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:23:36.459804 1624016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:23:36.462960 1624016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:23:36.466993 1624016 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 06:23:36.467697 1624016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:23:36.503726 1624016 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:23:36.503916 1624016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:23:36.577784 1624016 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 06:23:36.562135154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:23:36.577907 1624016 docker.go:319] overlay module found
	I1216 06:23:36.581080 1624016 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 06:23:36.583759 1624016 start.go:309] selected driver: docker
	I1216 06:23:36.583790 1624016 start.go:927] validating driver "docker" against &{Name:functional-487532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-487532 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:23:36.583891 1624016 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:23:36.587483 1624016 out.go:203] 
	W1216 06:23:36.590467 1624016 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 06:23:36.593242 1624016 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
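
Note: the three invocations above cover the main output modes of "minikube status": the default table, a custom Go template via -f (keys such as {{.Host}} and {{.Kubelet}} name fields of the status object; the literal "kublet:" label in the test's format string is just text), and machine-readable output via -o json. A minimal template example reusing only fields shown in the log:

	out/minikube-linux-arm64 -p functional-487532 status -f "host:{{.Host}},apiserver:{{.APIServer}}"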

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-487532 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-487532 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-4t4xc" [af582525-522e-4605-b3e8-75deda3b43dc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-4t4xc" [af582525-522e-4605-b3e8-75deda3b43dc] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003719534s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31028
functional_test.go:1680: http://192.168.49.2:31028: success! body:
Request served by hello-node-connect-7d85dfc575-4t4xc

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31028
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)
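
Note: the flow above is the usual way to reach a workload through a NodePort from the host: create a deployment, expose it on a port, then let "minikube service --url" resolve a reachable URL; the echoed request body confirms the round trip hit the pod. A condensed version with a hypothetical deployment name (echo-demo is not part of the test):

	kubectl --context functional-487532 create deployment echo-demo --image kicbase/echo-server
	kubectl --context functional-487532 expose deployment echo-demo --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-487532 service echo-demo --url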

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [6887f3e5-9a94-44e9-9fc2-446b0e478f69] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004107654s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-487532 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-487532 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-487532 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-487532 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f12f6531-9950-452f-9788-4c1f1042b222] Pending
helpers_test.go:353: "sp-pod" [f12f6531-9950-452f-9788-4c1f1042b222] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f12f6531-9950-452f-9788-4c1f1042b222] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003273915s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-487532 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-487532 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-487532 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4f76f47e-e376-4d6c-b2ed-e663593bda15] Pending
helpers_test.go:353: "sp-pod" [4f76f47e-e376-4d6c-b2ed-e663593bda15] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003235707s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-487532 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.85s)
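
Note: the test checks persistence rather than just provisioning: it writes /tmp/mount/foo in the first sp-pod, deletes that pod, recreates it from the same manifest against the same PVC, and then lists the mount. The condensed check, using the manifests and pod name from the log:

	kubectl --context functional-487532 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-487532 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-487532 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-487532 exec sp-pod -- ls /tmp/mount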

                                                
                                    
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh -n functional-487532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cp functional-487532:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd545263348/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh -n functional-487532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh -n functional-487532 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.47s)

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1599255/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo cat /etc/test/nested/copy/1599255/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (2.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1599255.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo cat /etc/ssl/certs/1599255.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1599255.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo cat /usr/share/ca-certificates/1599255.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/15992552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo cat /etc/ssl/certs/15992552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/15992552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo cat /usr/share/ca-certificates/15992552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.27s)
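
The hashed filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, the form used by c_rehash-style certificate directories such as /etc/ssl/certs. As a rough illustration (the certificate path is an assumption, not taken from the run), the hash half of such a filename comes from:

    openssl x509 -noout -hash -in /path/to/1599255.pem    # prints an 8-hex-digit subject hash; the test expects /etc/ssl/certs/<hash>.0 inside the VM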

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-487532 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 ssh "sudo systemctl is-active docker": exit status 1 (342.51927ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 ssh "sudo systemctl is-active containerd": exit status 1 (398.687922ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
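
This check confirms that only the configured runtime (crio) is active in the node: the other runtimes report inactive and systemctl exits with status 3, which is why the non-zero exits above still count as a pass. A manual spot-check along the same lines (the crio unit name is an assumption, not exercised by this test):

    out/minikube-linux-arm64 -p functional-487532 ssh "sudo systemctl is-active docker"      # expect "inactive", exit status 3
    out/minikube-linux-arm64 -p functional-487532 ssh "sudo systemctl is-active containerd"  # expect "inactive", exit status 3
    out/minikube-linux-arm64 -p functional-487532 ssh "sudo systemctl is-active crio"        # expect "active", exit status 0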

                                                
                                    
TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-487532 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-487532 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-487532 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-487532 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1621837: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-487532 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-487532 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [327db0ff-ca79-4748-8539-b8837ef3f659] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [327db0ff-ca79-4748-8539-b8837ef3f659] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002972146s
I1216 06:23:17.821656 1599255 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-487532 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.172.184 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-487532 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
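
Taken together, the tunnel steps above follow the standard minikube tunnel flow: start the tunnel, create a LoadBalancer service, wait for an ingress IP, hit it, then tear the tunnel down. A condensed sketch of the same flow outside the test harness (the curl target is a placeholder for whatever IP the jsonpath query prints):

    out/minikube-linux-arm64 -p functional-487532 tunnel --alsologtostderr &
    kubectl --context functional-487532 apply -f testdata/testsvc.yaml
    kubectl --context functional-487532 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://<ingress-ip>/    # placeholder; the run above resolved to 10.100.172.184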

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-487532 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-487532 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-cknk9" [c142421b-7184-4d9b-a787-cdfce18ea446] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-cknk9" [c142421b-7184-4d9b-a787-cdfce18ea446] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.008233168s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "354.903767ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "62.688424ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "357.990536ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.752306ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.28s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdany-port401346553/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765866211141594707" to /tmp/TestFunctionalparallelMountCmdany-port401346553/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765866211141594707" to /tmp/TestFunctionalparallelMountCmdany-port401346553/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765866211141594707" to /tmp/TestFunctionalparallelMountCmdany-port401346553/001/test-1765866211141594707
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.87284ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 06:23:31.489279 1599255 retry.go:31] will retry after 547.331196ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 06:23 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 06:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 06:23 test-1765866211141594707
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh cat /mount-9p/test-1765866211141594707
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-487532 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [673a830f-e194-4fc4-85b2-6576c0e387ee] Pending
helpers_test.go:353: "busybox-mount" [673a830f-e194-4fc4-85b2-6576c0e387ee] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [673a830f-e194-4fc4-85b2-6576c0e387ee] Running
helpers_test.go:353: "busybox-mount" [673a830f-e194-4fc4-85b2-6576c0e387ee] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [673a830f-e194-4fc4-85b2-6576c0e387ee] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003946573s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-487532 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdany-port401346553/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.28s)
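
The 9p mount verified above can be reproduced by hand; the host directory below is an illustrative stand-in for the test's temporary path:

    mkdir -p /tmp/minikube-mount-demo
    out/minikube-linux-arm64 mount -p functional-487532 /tmp/minikube-mount-demo:/mount-9p &    # host-path:guest-path, same form as the test
    out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-487532 ssh "ls -la /mount-9p"
    out/minikube-linux-arm64 -p functional-487532 ssh "sudo umount -f /mount-9p"    # cleanup, as the test does before stopping the mount process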

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 service list -o json
functional_test.go:1504: Took "492.947495ms" to run "out/minikube-linux-arm64 -p functional-487532 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32022
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32022
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
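
The ServiceCmd checks above reduce to exposing a deployment as a NodePort service and asking minikube for its endpoint; a condensed sketch using the same names and image as the log:

    kubectl --context functional-487532 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-487532 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-487532 service list
    out/minikube-linux-arm64 -p functional-487532 service hello-node --url    # this run printed http://192.168.49.2:32022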

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdspecific-port4103838059/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.107567ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 06:23:39.794576 1599255 retry.go:31] will retry after 371.741525ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdspecific-port4103838059/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 ssh "sudo umount -f /mount-9p": exit status 1 (333.795455ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-487532 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdspecific-port4103838059/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3625700184/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3625700184/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3625700184/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T" /mount1: (1.022843882s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-487532 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3625700184/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3625700184/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-487532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3625700184/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 version -o=json --components: (1.390120215s)
--- PASS: TestFunctional/parallel/Version/components (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-487532 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-487532
localhost/kicbase/echo-server:functional-487532
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-487532 image ls --format short --alsologtostderr:
I1216 06:23:53.049996 1626649 out.go:360] Setting OutFile to fd 1 ...
I1216 06:23:53.050204 1626649 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:53.050239 1626649 out.go:374] Setting ErrFile to fd 2...
I1216 06:23:53.050262 1626649 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:53.050557 1626649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:23:53.051229 1626649 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:53.051452 1626649 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:53.052082 1626649 cli_runner.go:164] Run: docker container inspect functional-487532 --format={{.State.Status}}
I1216 06:23:53.074275 1626649 ssh_runner.go:195] Run: systemctl --version
I1216 06:23:53.074349 1626649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487532
I1216 06:23:53.099207 1626649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-487532/id_rsa Username:docker}
I1216 06:23:53.195544 1626649 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-487532 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-487532  │ abfd07d6f4ed5 │ 3.33kB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-487532  │ ce2d2cda2d858 │ 4.79MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-487532 image ls --format table --alsologtostderr:
I1216 06:23:54.512961 1626955 out.go:360] Setting OutFile to fd 1 ...
I1216 06:23:54.513130 1626955 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:54.513140 1626955 out.go:374] Setting ErrFile to fd 2...
I1216 06:23:54.513146 1626955 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:54.513403 1626955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:23:54.514038 1626955 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:54.514156 1626955 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:54.514714 1626955 cli_runner.go:164] Run: docker container inspect functional-487532 --format={{.State.Status}}
I1216 06:23:54.532699 1626955 ssh_runner.go:195] Run: systemctl --version
I1216 06:23:54.532759 1626955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487532
I1216 06:23:54.552597 1626955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-487532/id_rsa Username:docker}
I1216 06:23:54.647698 1626955 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-487532 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"2c5f0dedd21c25ec3a
6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["regis
try.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"
repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-487532"],"size":"4789170"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"abfd07d6f4ed5b412fcfa2dc43f2e62b8c321c37739550fda77c9d79e25a057f","repoDigests":["localhost/minikube-local-cache-test@sha256:a1c07ebc110f06a805620ef88c385c0bbf05f6bdf7f69500748c39f11fabc2d2"],"repoTags":["localhost/minikube-local-cache-test:functional-487532"],"size":"3330"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b8
43ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70
f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571
b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-487532 image ls --format json --alsologtostderr:
I1216 06:23:54.284797 1626918 out.go:360] Setting OutFile to fd 1 ...
I1216 06:23:54.284915 1626918 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:54.284926 1626918 out.go:374] Setting ErrFile to fd 2...
I1216 06:23:54.284931 1626918 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:54.285213 1626918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:23:54.285823 1626918 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:54.285943 1626918 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:54.286473 1626918 cli_runner.go:164] Run: docker container inspect functional-487532 --format={{.State.Status}}
I1216 06:23:54.304526 1626918 ssh_runner.go:195] Run: systemctl --version
I1216 06:23:54.304583 1626918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487532
I1216 06:23:54.322234 1626918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-487532/id_rsa Username:docker}
I1216 06:23:54.431152 1626918 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-487532 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-487532
size: "4789170"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: abfd07d6f4ed5b412fcfa2dc43f2e62b8c321c37739550fda77c9d79e25a057f
repoDigests:
- localhost/minikube-local-cache-test@sha256:a1c07ebc110f06a805620ef88c385c0bbf05f6bdf7f69500748c39f11fabc2d2
repoTags:
- localhost/minikube-local-cache-test:functional-487532
size: "3330"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-487532 image ls --format yaml --alsologtostderr:
I1216 06:23:53.309854 1626734 out.go:360] Setting OutFile to fd 1 ...
I1216 06:23:53.310079 1626734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:53.310092 1626734 out.go:374] Setting ErrFile to fd 2...
I1216 06:23:53.310097 1626734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:53.310385 1626734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:23:53.311167 1626734 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:53.311291 1626734 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:53.311942 1626734 cli_runner.go:164] Run: docker container inspect functional-487532 --format={{.State.Status}}
I1216 06:23:53.330201 1626734 ssh_runner.go:195] Run: systemctl --version
I1216 06:23:53.330259 1626734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487532
I1216 06:23:53.348856 1626734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-487532/id_rsa Username:docker}
I1216 06:23:53.446954 1626734 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-487532 ssh pgrep buildkitd: exit status 1 (290.038849ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr: (3.191428712s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8ec97f3c8d1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-487532
--> 919dbadcb28
Successfully tagged localhost/my-image:functional-487532
919dbadcb2827d7111f40764ce197790e061cf6e8136a0c5a21df8b88d007f20
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-487532 image build -t localhost/my-image:functional-487532 testdata/build --alsologtostderr:
I1216 06:23:53.837029 1626851 out.go:360] Setting OutFile to fd 1 ...
I1216 06:23:53.838468 1626851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:53.838491 1626851 out.go:374] Setting ErrFile to fd 2...
I1216 06:23:53.838498 1626851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:23:53.838912 1626851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:23:53.839949 1626851 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:53.844129 1626851 config.go:182] Loaded profile config "functional-487532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 06:23:53.845056 1626851 cli_runner.go:164] Run: docker container inspect functional-487532 --format={{.State.Status}}
I1216 06:23:53.865338 1626851 ssh_runner.go:195] Run: systemctl --version
I1216 06:23:53.865404 1626851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487532
I1216 06:23:53.883280 1626851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-487532/id_rsa Username:docker}
I1216 06:23:53.983265 1626851 build_images.go:162] Building image from path: /tmp/build.4273666603.tar
I1216 06:23:53.983391 1626851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 06:23:53.991329 1626851 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4273666603.tar
I1216 06:23:53.995157 1626851 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4273666603.tar: stat -c "%s %y" /var/lib/minikube/build/build.4273666603.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4273666603.tar': No such file or directory
I1216 06:23:53.995188 1626851 ssh_runner.go:362] scp /tmp/build.4273666603.tar --> /var/lib/minikube/build/build.4273666603.tar (3072 bytes)
I1216 06:23:54.021230 1626851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4273666603
I1216 06:23:54.032220 1626851 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4273666603 -xf /var/lib/minikube/build/build.4273666603.tar
I1216 06:23:54.044551 1626851 crio.go:315] Building image: /var/lib/minikube/build/build.4273666603
I1216 06:23:54.044650 1626851 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-487532 /var/lib/minikube/build/build.4273666603 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1216 06:23:56.954884 1626851 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-487532 /var/lib/minikube/build/build.4273666603 --cgroup-manager=cgroupfs: (2.910202277s)
I1216 06:23:56.954953 1626851 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4273666603
I1216 06:23:56.962702 1626851 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4273666603.tar
I1216 06:23:56.970354 1626851 build_images.go:218] Built localhost/my-image:functional-487532 from /tmp/build.4273666603.tar
I1216 06:23:56.970387 1626851 build_images.go:134] succeeded building to: functional-487532
I1216 06:23:56.970392 1626851 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)
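
The build log above shows how an image build works on a crio node: minikube tars the local build context, copies the tar to /var/lib/minikube/build inside the node, and runs `sudo podman build` there. Below is a minimal Go sketch of the same invocation, assuming the out/minikube-linux-arm64 binary and a local testdata/build directory containing a Dockerfile, as in this run; the profile name is the one used here.

// Drive the same `image build` the test runs above; minikube handles the
// tar/copy/podman steps on the node.
package main

import (
	"log"
	"os/exec"
)

func main() {
	build := exec.Command("out/minikube-linux-arm64", "-p", "functional-487532",
		"image", "build", "-t", "localhost/my-image:functional-487532",
		"testdata/build", "--alsologtostderr")
	out, err := build.CombinedOutput()
	if err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	log.Printf("image build output:\n%s", out)
}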

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-487532
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image load --daemon kicbase/echo-server:functional-487532 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 image load --daemon kicbase/echo-server:functional-487532 --alsologtostderr: (2.215442986s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image load --daemon kicbase/echo-server:functional-487532 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-487532 image load --daemon kicbase/echo-server:functional-487532 --alsologtostderr: (1.258152264s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-487532
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image load --daemon kicbase/echo-server:functional-487532 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)
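
The three image-load blocks above all exercise `image load --daemon`, which takes an image that exists in the host Docker daemon and copies it into the node's container storage. A minimal sketch of that tag-then-load flow, assuming the image and profile names from this run:

// Tag an image in the host Docker daemon, then load it into the cluster and
// list the node's images to confirm it arrived.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	run("docker", "tag", "kicbase/echo-server:latest", "kicbase/echo-server:functional-487532")
	run("out/minikube-linux-arm64", "-p", "functional-487532",
		"image", "load", "--daemon", "kicbase/echo-server:functional-487532", "--alsologtostderr")
	run("out/minikube-linux-arm64", "-p", "functional-487532", "image", "ls")
}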

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image save kicbase/echo-server:functional-487532 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
2025/12/16 06:23:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image rm kicbase/echo-server:functional-487532 --alsologtostderr
E1216 06:23:50.684668 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-487532
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 image save --daemon kicbase/echo-server:functional-487532 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-487532
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
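
Taken together, the save/remove/load blocks above form a tar round trip: `image save` writes the node's copy of the image to a tar on the host, `image rm` removes it from the node, `image load` restores it from the tar, and `image save --daemon` copies it back into the host Docker daemon (the follow-up docker image inspect looks it up under the localhost/ prefix). A minimal sketch of that round trip, using the tar path and names from this run:

// Round-trip an image through a tar file and back into the cluster.
package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) {
	args = append([]string{"-p", "functional-487532"}, args...)
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar"
	mk("image", "save", "kicbase/echo-server:functional-487532", tar)
	mk("image", "rm", "kicbase/echo-server:functional-487532")
	mk("image", "load", tar)
	mk("image", "save", "--daemon", "kicbase/echo-server:functional-487532")
}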

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-487532 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-487532
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-487532
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-487532
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22141-1596013/.minikube/files/etc/test/nested/copy/1599255/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 cache add registry.k8s.io/pause:3.1: (1.103195078s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 cache add registry.k8s.io/pause:3.3: (1.11751761s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 cache add registry.k8s.io/pause:latest: (1.177845165s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3572696377/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cache add minikube-local-cache-test:functional-364120
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cache delete minikube-local-cache-test:functional-364120
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-364120
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (310.040594ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.90s)
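
The cache_reload block demonstrates that images added with `cache add` survive removal inside the node: after `crictl rmi`, `crictl inspecti` fails (the expected exit status 1 above), and `cache reload` pushes the cached copy back. A minimal sketch of that sequence, assuming the profile and image from this run:

// Remove a cached image inside the node, confirm it is gone, then restore it
// with `cache reload`.
package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) ([]byte, error) {
	args = append([]string{"-p", "functional-364120"}, args...)
	return exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
}

func main() {
	if out, err := mk("cache", "add", "registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("cache add: %v\n%s", err, out)
	}
	mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if _, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image still present after rmi")
	}
	if out, err := mk("cache", "reload"); err != nil {
		log.Fatalf("cache reload: %v\n%s", err, out)
	}
	if out, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("image missing after reload: %v\n%s", err, out)
	}
}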

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2624562802/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2624562802/001/logs.txt: (1.005846153s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 config get cpus: exit status 14 (82.667249ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 config get cpus: exit status 14 (90.30983ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
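
The config block shows that `config get` exits non-zero when the key has never been set or has been unset (status 14 in this run), so callers have to distinguish "not set" from a real failure. A small sketch of that distinction, assuming the profile name from this run:

// Read a config key and treat the "key could not be found" exit as unset.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-364120",
		"config", "get", "cpus").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cpus =", strings.TrimSpace(string(out)))
	case errors.As(err, &ee):
		fmt.Printf("cpus is not set (config get exited with status %d)\n", ee.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}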

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (198.134041ms)

                                                
                                                
-- stdout --
	* [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:53:33.427837 1656981 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:53:33.428645 1656981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.428656 1656981 out.go:374] Setting ErrFile to fd 2...
	I1216 06:53:33.428662 1656981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.428931 1656981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:53:33.429304 1656981 out.go:368] Setting JSON to false
	I1216 06:53:33.430127 1656981 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":34565,"bootTime":1765833449,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:53:33.430189 1656981 start.go:143] virtualization:  
	I1216 06:53:33.433239 1656981 out.go:179] * [functional-364120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 06:53:33.436873 1656981 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:53:33.437142 1656981 notify.go:221] Checking for updates...
	I1216 06:53:33.442442 1656981 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:53:33.445190 1656981 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:53:33.447951 1656981 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:53:33.450847 1656981 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:53:33.454182 1656981 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:53:33.457519 1656981 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:53:33.458105 1656981 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:53:33.488955 1656981 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:53:33.489121 1656981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.546043 1656981 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.535883725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.546157 1656981 docker.go:319] overlay module found
	I1216 06:53:33.549458 1656981 out.go:179] * Using the docker driver based on existing profile
	I1216 06:53:33.552406 1656981 start.go:309] selected driver: docker
	I1216 06:53:33.552434 1656981 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.552560 1656981 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:53:33.556178 1656981 out.go:203] 
	W1216 06:53:33.559107 1656981 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 06:53:33.562119 1656981 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-364120 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)
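
The first dry run fails with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the usable minimum minikube reports (1800MB in this run's message); the second dry run, without a memory override, keeps the profile's existing 4096MB allocation and completes. A tiny sketch of that check, with the minimum taken from the error message above rather than from minikube's source:

// Reject a memory request below the usable minimum quoted in the error above.
package main

import "fmt"

const usableMinimumMB = 1800 // value quoted in this run's RSRC_INSUFFICIENT_REQ_MEMORY message

func validateMemoryMB(requested int) error {
	if requested < usableMinimumMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requested, usableMinimumMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemoryMB(250))  // fails, as the 250MB dry run does above
	fmt.Println(validateMemoryMB(4096)) // the profile's configured memory passes
}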

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-364120 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (186.034455ms)

                                                
                                                
-- stdout --
	* [functional-364120] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:53:33.232956 1656932 out.go:360] Setting OutFile to fd 1 ...
	I1216 06:53:33.233198 1656932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.233234 1656932 out.go:374] Setting ErrFile to fd 2...
	I1216 06:53:33.233259 1656932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:53:33.233650 1656932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 06:53:33.234081 1656932 out.go:368] Setting JSON to false
	I1216 06:53:33.234972 1656932 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":34565,"bootTime":1765833449,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1216 06:53:33.235072 1656932 start.go:143] virtualization:  
	I1216 06:53:33.238646 1656932 out.go:179] * [functional-364120] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1216 06:53:33.241824 1656932 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:53:33.241906 1656932 notify.go:221] Checking for updates...
	I1216 06:53:33.248380 1656932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:53:33.251431 1656932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	I1216 06:53:33.254289 1656932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	I1216 06:53:33.257179 1656932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 06:53:33.260025 1656932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:53:33.263388 1656932 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 06:53:33.264027 1656932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:53:33.292633 1656932 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 06:53:33.292776 1656932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:53:33.347184 1656932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 06:53:33.338018035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 06:53:33.347298 1656932 docker.go:319] overlay module found
	I1216 06:53:33.350412 1656932 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 06:53:33.353349 1656932 start.go:309] selected driver: docker
	I1216 06:53:33.353374 1656932 start.go:927] validating driver "docker" against &{Name:functional-364120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-364120 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:53:33.353475 1656932 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:53:33.357037 1656932 out.go:203] 
	W1216 06:53:33.360016 1656932 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 06:53:33.362923 1656932 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh -n functional-364120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cp functional-364120:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2475148058/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh -n functional-364120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh -n functional-364120 "sudo cat /tmp/does/not/exist/cp-test.txt"
E1216 06:51:06.817966 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1599255/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo cat /etc/test/nested/copy/1599255/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1599255.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo cat /etc/ssl/certs/1599255.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1599255.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo cat /usr/share/ca-certificates/1599255.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/15992552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo cat /etc/ssl/certs/15992552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/15992552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo cat /usr/share/ca-certificates/15992552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.67s)
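
The cert-sync checks above look for the synced test certificates inside the node both under their plain file names (/etc/ssl/certs/1599255.pem, /usr/share/ca-certificates/1599255.pem) and under hashed names in /etc/ssl/certs (51391683.0, 3ec20f2e.0), which match the OpenSSL hashed-name convention for CA directories. A minimal sketch that performs the same presence check over `minikube ssh`, using the paths and profile from this run:

// Verify synced certificates are readable inside the node at the expected paths.
package main

import (
	"fmt"
	"os/exec"
)

func catInNode(path string) error {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-364120",
		"ssh", "sudo cat "+path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	for _, p := range []string{
		"/etc/ssl/certs/1599255.pem",
		"/usr/share/ca-certificates/1599255.pem",
		"/etc/ssl/certs/51391683.0",
	} {
		if err := catInNode(p); err != nil {
			fmt.Println("missing:", err)
			continue
		}
		fmt.Println("present:", p)
	}
}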

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh "sudo systemctl is-active docker": exit status 1 (270.385225ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh "sudo systemctl is-active containerd": exit status 1 (259.464832ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)
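
Because this profile runs crio, the docker and containerd units are expected to be inactive; `systemctl is-active` exits non-zero for an inactive unit (the "ssh: Process exited with status 3" above), so the state printed on stdout is what matters, not the exit code. A minimal sketch of that check, assuming the profile name from this run:

// Report the systemd state of the non-selected runtimes inside the node,
// tolerating the non-zero exit that `systemctl is-active` uses for "inactive".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeState(unit string) string {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-364120",
		"ssh", "sudo systemctl is-active "+unit)
	out, _ := cmd.Output() // a non-zero exit is expected when the unit is inactive
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s: %s\n", unit, runtimeState(unit))
	}
}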

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-364120 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "319.38188ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.57569ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "360.333936ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.236405ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)
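As a reference for consuming the JSON form exercised above, a minimal sketch using jq; the top-level "valid" list and per-profile "Name" field are assumptions about the current output schema and may differ between minikube versions:

    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'
    out/minikube-linux-arm64 profile list -o json --light | jq .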

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3205506699/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.590695ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 06:53:26.667755 1599255 retry.go:31] will retry after 467.857188ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3205506699/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh "sudo umount -f /mount-9p": exit status 1 (267.443644ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-364120 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3205506699/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.83s)
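The sequence above corresponds to the following manual flow; a minimal sketch in which /tmp/mount-src is a placeholder for the generated temporary directory the test uses, with the mount daemon backgrounded by hand:

    out/minikube-linux-arm64 mount -p functional-364120 /tmp/mount-src:/mount-9p --port 46464 &
    out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-364120 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-364120 ssh "sudo umount -f /mount-9p"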

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T" /mount1: exit status 1 (539.739406ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 06:53:28.710294 1599255 retry.go:31] will retry after 425.320945ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-364120 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-364120 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1494651641/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.84s)
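For reference, the cleanup path verified here can be reproduced by hand; a minimal sketch with a placeholder host directory, backgrounding each mount daemon and then ending them via the profile-wide kill flag seen above:

    out/minikube-linux-arm64 mount -p functional-364120 /tmp/mount-src:/mount1 &
    out/minikube-linux-arm64 mount -p functional-364120 /tmp/mount-src:/mount2 &
    out/minikube-linux-arm64 mount -p functional-364120 /tmp/mount-src:/mount3 &
    out/minikube-linux-arm64 -p functional-364120 ssh "findmnt -T" /mount1
    out/minikube-linux-arm64 mount -p functional-364120 --kill=true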

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-364120 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-364120
localhost/kicbase/echo-server:functional-364120
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-364120 image ls --format short --alsologtostderr:
I1216 06:53:45.708654 1659144 out.go:360] Setting OutFile to fd 1 ...
I1216 06:53:45.708823 1659144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:45.708853 1659144 out.go:374] Setting ErrFile to fd 2...
I1216 06:53:45.708873 1659144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:45.709140 1659144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:53:45.709783 1659144 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:45.709951 1659144 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:45.710563 1659144 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:53:45.728124 1659144 ssh_runner.go:195] Run: systemctl --version
I1216 06:53:45.728187 1659144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:53:45.745113 1659144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
I1216 06:53:45.838896 1659144 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-364120 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-364120  │ abfd07d6f4ed5 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/kicbase/echo-server           │ functional-364120  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/my-image                      │ functional-364120  │ d3d70fb3bc6b1 │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-364120 image ls --format table --alsologtostderr:
I1216 06:53:50.197911 1659680 out.go:360] Setting OutFile to fd 1 ...
I1216 06:53:50.198187 1659680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:50.198203 1659680 out.go:374] Setting ErrFile to fd 2...
I1216 06:53:50.198210 1659680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:50.198739 1659680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:53:50.199425 1659680 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:50.199590 1659680 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:50.200130 1659680 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:53:50.218355 1659680 ssh_runner.go:195] Run: systemctl --version
I1216 06:53:50.218412 1659680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:53:50.235456 1659680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
I1216 06:53:50.331074 1659680 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-364120 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"ccd634d9bcc36ac6235e9c86676cd
3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/paus
e:3.10.1"],"size":"519884"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"9661a9ccf3c05db3d304494a1d0ccbd4dd4c6a6f90504055b3906d0ff39ecb20","repoDigests":["docker.io/library/83d29644dc691ae257a9f197493163dd51bfde33ef5e22c6544f646888056b54-tmp@sha256:7fa7d26ae68cdb296c9aa737701ed9eb31dda2d2809cf22b90c02694e8ae0177"],"repoTags":[],"size":"1638179"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.i
o/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"abfd07d6f4ed5b412fcfa2dc43f2e62b8c321c37739550fda77c9d79e25a057f","repoDigests":["localhost/minikube-local-cache-test@sha256:a1c07ebc110f06a805620ef88c385c0bbf05f6bdf7f69500748c39f11fabc2d2"],"repoTags":["localhost/minikube-local-cache-test:functional-364120"],"size":"3330"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5
901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-364120"],"size":"4788229"},{"id":"d3d70fb3bc6b1d6c254a2fff7ba43fc8201a320836589d2e891b8ee2dab76a63","repoDigests":["localhost/my-image@sha256:5a1a419f04c9f8b1fd7dfb5bd40427350a17f3f9d412cfcdda
e0e2afc630c293"],"repoTags":["localhost/my-image:functional-364120"],"size":"1640791"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-364120 image ls --format json --alsologtostderr:
I1216 06:53:49.953591 1659638 out.go:360] Setting OutFile to fd 1 ...
I1216 06:53:49.953723 1659638 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:49.953735 1659638 out.go:374] Setting ErrFile to fd 2...
I1216 06:53:49.953741 1659638 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:49.954282 1659638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:53:49.954915 1659638 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:49.955045 1659638 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:49.955920 1659638 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:53:49.974030 1659638 ssh_runner.go:195] Run: systemctl --version
I1216 06:53:49.974099 1659638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:53:50.005019 1659638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
I1216 06:53:50.103153 1659638 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)
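The JSON listing above is a flat array of image objects, so it filters cleanly with jq; a minimal sketch using only fields visible in this output:

    out/minikube-linux-arm64 -p functional-364120 image ls --format json | jq -r '.[].repoTags[]'
    out/minikube-linux-arm64 -p functional-364120 image ls --format json | jq -r '.[] | select(.repoTags == []) | .id'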

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-364120 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-364120
size: "4788229"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: d3d70fb3bc6b1d6c254a2fff7ba43fc8201a320836589d2e891b8ee2dab76a63
repoDigests:
- localhost/my-image@sha256:5a1a419f04c9f8b1fd7dfb5bd40427350a17f3f9d412cfcddae0e2afc630c293
repoTags:
- localhost/my-image:functional-364120
size: "1640791"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: abfd07d6f4ed5b412fcfa2dc43f2e62b8c321c37739550fda77c9d79e25a057f
repoDigests:
- localhost/minikube-local-cache-test@sha256:a1c07ebc110f06a805620ef88c385c0bbf05f6bdf7f69500748c39f11fabc2d2
repoTags:
- localhost/minikube-local-cache-test:functional-364120
size: "3330"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 9661a9ccf3c05db3d304494a1d0ccbd4dd4c6a6f90504055b3906d0ff39ecb20
repoDigests:
- docker.io/library/83d29644dc691ae257a9f197493163dd51bfde33ef5e22c6544f646888056b54-tmp@sha256:7fa7d26ae68cdb296c9aa737701ed9eb31dda2d2809cf22b90c02694e8ae0177
repoTags: []
size: "1638179"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-364120 image ls --format yaml --alsologtostderr:
I1216 06:53:49.725044 1659600 out.go:360] Setting OutFile to fd 1 ...
I1216 06:53:49.725219 1659600 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:49.725243 1659600 out.go:374] Setting ErrFile to fd 2...
I1216 06:53:49.725264 1659600 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:49.725644 1659600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:53:49.726615 1659600 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:49.726806 1659600 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:49.727692 1659600 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:53:49.745544 1659600 ssh_runner.go:195] Run: systemctl --version
I1216 06:53:49.745607 1659600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:53:49.762724 1659600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
I1216 06:53:49.855353 1659600 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-364120 ssh pgrep buildkitd: exit status 1 (259.076712ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image build -t localhost/my-image:functional-364120 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-364120 image build -t localhost/my-image:functional-364120 testdata/build --alsologtostderr: (3.149419641s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-364120 image build -t localhost/my-image:functional-364120 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9661a9ccf3c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-364120
--> d3d70fb3bc6
Successfully tagged localhost/my-image:functional-364120
d3d70fb3bc6b1d6c254a2fff7ba43fc8201a320836589d2e891b8ee2dab76a63
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-364120 image build -t localhost/my-image:functional-364120 testdata/build --alsologtostderr:
I1216 06:53:46.349025 1659290 out.go:360] Setting OutFile to fd 1 ...
I1216 06:53:46.349240 1659290 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:46.349267 1659290 out.go:374] Setting ErrFile to fd 2...
I1216 06:53:46.349287 1659290 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 06:53:46.349551 1659290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
I1216 06:53:46.350220 1659290 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:46.351435 1659290 config.go:182] Loaded profile config "functional-364120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 06:53:46.352203 1659290 cli_runner.go:164] Run: docker container inspect functional-364120 --format={{.State.Status}}
I1216 06:53:46.370206 1659290 ssh_runner.go:195] Run: systemctl --version
I1216 06:53:46.370312 1659290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-364120
I1216 06:53:46.388756 1659290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/functional-364120/id_rsa Username:docker}
I1216 06:53:46.484027 1659290 build_images.go:162] Building image from path: /tmp/build.3724850361.tar
I1216 06:53:46.484104 1659290 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 06:53:46.492027 1659290 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3724850361.tar
I1216 06:53:46.495537 1659290 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3724850361.tar: stat -c "%s %y" /var/lib/minikube/build/build.3724850361.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3724850361.tar': No such file or directory
I1216 06:53:46.495570 1659290 ssh_runner.go:362] scp /tmp/build.3724850361.tar --> /var/lib/minikube/build/build.3724850361.tar (3072 bytes)
I1216 06:53:46.513521 1659290 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3724850361
I1216 06:53:46.521011 1659290 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3724850361 -xf /var/lib/minikube/build/build.3724850361.tar
I1216 06:53:46.529771 1659290 crio.go:315] Building image: /var/lib/minikube/build/build.3724850361
I1216 06:53:46.529891 1659290 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-364120 /var/lib/minikube/build/build.3724850361 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1216 06:53:49.417594 1659290 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-364120 /var/lib/minikube/build/build.3724850361 --cgroup-manager=cgroupfs: (2.887674714s)
I1216 06:53:49.417668 1659290 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3724850361
I1216 06:53:49.425332 1659290 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3724850361.tar
I1216 06:53:49.432947 1659290 build_images.go:218] Built localhost/my-image:functional-364120 from /tmp/build.3724850361.tar
I1216 06:53:49.432979 1659290 build_images.go:134] succeeded building to: functional-364120
I1216 06:53:49.432985 1659290 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.64s)
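Judging from the three build steps in the stdout above, the testdata/build context is roughly equivalent to the following; this is an inferred sketch, not the repository's actual files, with content.txt as a stand-in:

    mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo test > content.txt
    out/minikube-linux-arm64 -p functional-364120 image build -t localhost/my-image:functional-364120 .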

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-364120
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image load --daemon kicbase/echo-server:functional-364120 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image load --daemon kicbase/echo-server:functional-364120 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-364120
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image load --daemon kicbase/echo-server:functional-364120 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image save kicbase/echo-server:functional-364120 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image rm kicbase/echo-server:functional-364120 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.73s)
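Taken together with the earlier save and remove steps, the round trip exercised here looks like the following; a minimal sketch with a placeholder tarball path:

    out/minikube-linux-arm64 -p functional-364120 image save kicbase/echo-server:functional-364120 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-364120 image rm kicbase/echo-server:functional-364120
    out/minikube-linux-arm64 -p functional-364120 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-364120 image ls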

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-364120
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 image save --daemon kicbase/echo-server:functional-364120 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-364120
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-364120 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)
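All three update-context variants above run the same command against the functional-364120 profile; a minimal sketch of verifying the result afterwards, assuming kubectl is available on the host PATH:

    out/minikube-linux-arm64 -p functional-364120 update-context
    kubectl config current-context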

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-364120
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-364120
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-364120
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (200.05s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 06:56:06.675280 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:06.683569 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:06.694912 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:06.716288 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:06.757626 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:06.817168 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:06.839495 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:07.000995 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:07.322622 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:07.964620 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:09.245994 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:11.808718 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:16.930174 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:27.171977 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:56:47.654266 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:57:28.616071 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:58:08.325719 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m19.184771654s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.05s)
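The HA cluster brought up here can be reproduced with the same flags; a minimal sketch using the profile name, memory size and runtime from the run above:

    out/minikube-linux-arm64 -p ha-614518 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5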

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 kubectl -- rollout status deployment/busybox: (5.550082292s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-9rkhz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-q9kjv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-s24fs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-9rkhz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-q9kjv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-s24fs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-9rkhz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-q9kjv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-s24fs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.23s)
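
The rollout/nslookup sequence above can be replayed by hand against the same profile. A minimal sketch, reusing the exact commands from the log (profile and deployment names are taken from this run; the loop itself is not part of the test):

    PROFILE=ha-614518
    out/minikube-linux-arm64 -p "$PROFILE" kubectl -- rollout status deployment/busybox
    for POD in $(out/minikube-linux-arm64 -p "$PROFILE" kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      # same three lookups the test runs for every busybox pod
      out/minikube-linux-arm64 -p "$PROFILE" kubectl -- exec "$POD" -- nslookup kubernetes.io
      out/minikube-linux-arm64 -p "$PROFILE" kubectl -- exec "$POD" -- nslookup kubernetes.default
      out/minikube-linux-arm64 -p "$PROFILE" kubectl -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
    done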

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-9rkhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-9rkhz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-q9kjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-q9kjv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-s24fs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 kubectl -- exec busybox-7b57f96db7-s24fs -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
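
For reference, the host-reachability check above can be reproduced for a single pod; this sketch reuses the exact pipeline from the log (the pod name is one of the pods created in the previous step):

    PROFILE=ha-614518
    POD=busybox-7b57f96db7-9rkhz
    HOST_IP=$(out/minikube-linux-arm64 -p "$PROFILE" kubectl -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # in this run the resolved host IP was 192.168.49.1
    out/minikube-linux-arm64 -p "$PROFILE" kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"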

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (60.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 node add --alsologtostderr -v 5
E1216 06:58:50.537739 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 node add --alsologtostderr -v 5: (59.034566385s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5: (1.059239165s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-614518 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.040995888s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)
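
This check runs profile list --output json; a hedged way to look at the same data by hand (the "valid" array and the "Name"/"Status" field names are assumptions based on typical profile-list JSON and are not confirmed by this log; jq is assumed to be installed):

    out/minikube-linux-arm64 profile list --output json | jq -r '.valid[] | "\(.Name)\t\(.Status)"'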

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (21.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 status --output json --alsologtostderr -v 5: (1.076790784s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp testdata/cp-test.txt ha-614518:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1403810740/001/cp-test_ha-614518.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518:/home/docker/cp-test.txt ha-614518-m02:/home/docker/cp-test_ha-614518_ha-614518-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test_ha-614518_ha-614518-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518:/home/docker/cp-test.txt ha-614518-m03:/home/docker/cp-test_ha-614518_ha-614518-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test_ha-614518_ha-614518-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518:/home/docker/cp-test.txt ha-614518-m04:/home/docker/cp-test_ha-614518_ha-614518-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test_ha-614518_ha-614518-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp testdata/cp-test.txt ha-614518-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1403810740/001/cp-test_ha-614518-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m02:/home/docker/cp-test.txt ha-614518:/home/docker/cp-test_ha-614518-m02_ha-614518.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test_ha-614518-m02_ha-614518.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m02:/home/docker/cp-test.txt ha-614518-m03:/home/docker/cp-test_ha-614518-m02_ha-614518-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test_ha-614518-m02_ha-614518-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m02:/home/docker/cp-test.txt ha-614518-m04:/home/docker/cp-test_ha-614518-m02_ha-614518-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test_ha-614518-m02_ha-614518-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp testdata/cp-test.txt ha-614518-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1403810740/001/cp-test_ha-614518-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m03:/home/docker/cp-test.txt ha-614518:/home/docker/cp-test_ha-614518-m03_ha-614518.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test_ha-614518-m03_ha-614518.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m03:/home/docker/cp-test.txt ha-614518-m02:/home/docker/cp-test_ha-614518-m03_ha-614518-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test_ha-614518-m03_ha-614518-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m03:/home/docker/cp-test.txt ha-614518-m04:/home/docker/cp-test_ha-614518-m03_ha-614518-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test_ha-614518-m03_ha-614518-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp testdata/cp-test.txt ha-614518-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1403810740/001/cp-test_ha-614518-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518:/home/docker/cp-test_ha-614518-m04_ha-614518.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518 "sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m02:/home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test.txt": (1.006156152s)
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m02 "sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 cp ha-614518-m04:/home/docker/cp-test.txt ha-614518-m03:/home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 ssh -n ha-614518-m03 "sudo cat /home/docker/cp-test_ha-614518-m04_ha-614518-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.87s)
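
The copy matrix above repeats one pattern per node pair: cp a file into a node, then ssh in and cat it back. A minimal sketch of that pattern between the primary node and m02 (the cp-test_copy.txt destination name is made up for illustration):

    PROFILE=ha-614518
    out/minikube-linux-arm64 -p "$PROFILE" cp testdata/cp-test.txt "$PROFILE":/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p "$PROFILE" ssh -n "$PROFILE" "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p "$PROFILE" cp "$PROFILE":/home/docker/cp-test.txt "$PROFILE"-m02:/home/docker/cp-test_copy.txt
    out/minikube-linux-arm64 -p "$PROFILE" ssh -n "$PROFILE"-m02 "sudo cat /home/docker/cp-test_copy.txt"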

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 node stop m02 --alsologtostderr -v 5: (12.102950511s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5: exit status 7 (832.18541ms)

                                                
                                                
-- stdout --
	ha-614518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-614518-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-614518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 07:00:16.046359 1675348 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:00:16.046479 1675348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:00:16.046490 1675348 out.go:374] Setting ErrFile to fd 2...
	I1216 07:00:16.046496 1675348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:00:16.046805 1675348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:00:16.047005 1675348 out.go:368] Setting JSON to false
	I1216 07:00:16.047039 1675348 mustload.go:66] Loading cluster: ha-614518
	I1216 07:00:16.047143 1675348 notify.go:221] Checking for updates...
	I1216 07:00:16.047489 1675348 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:00:16.047515 1675348 status.go:174] checking status of ha-614518 ...
	I1216 07:00:16.048045 1675348 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:00:16.075141 1675348 status.go:371] ha-614518 host status = "Running" (err=<nil>)
	I1216 07:00:16.075184 1675348 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:00:16.075496 1675348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518
	I1216 07:00:16.110683 1675348 host.go:66] Checking if "ha-614518" exists ...
	I1216 07:00:16.111007 1675348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:00:16.111055 1675348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518
	I1216 07:00:16.131981 1675348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34265 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518/id_rsa Username:docker}
	I1216 07:00:16.254569 1675348 ssh_runner.go:195] Run: systemctl --version
	I1216 07:00:16.261220 1675348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:00:16.275322 1675348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:00:16.353354 1675348 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-16 07:00:16.331324194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:00:16.353907 1675348 kubeconfig.go:125] found "ha-614518" server: "https://192.168.49.254:8443"
	I1216 07:00:16.353943 1675348 api_server.go:166] Checking apiserver status ...
	I1216 07:00:16.353998 1675348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:00:16.373746 1675348 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	I1216 07:00:16.384221 1675348 api_server.go:182] apiserver freezer: "5:freezer:/docker/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/crio/crio-e53f1a4d46ada82f9d550fd7d9b995db18edf2ac050485d5e8ce9d113179895b"
	I1216 07:00:16.384317 1675348 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e2503ac81b82256526f5aa49d6145c5c534bc177f13530507608bbd038a0fb46/crio/crio-e53f1a4d46ada82f9d550fd7d9b995db18edf2ac050485d5e8ce9d113179895b/freezer.state
	I1216 07:00:16.395476 1675348 api_server.go:204] freezer state: "THAWED"
	I1216 07:00:16.395505 1675348 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 07:00:16.404223 1675348 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 07:00:16.404253 1675348 status.go:463] ha-614518 apiserver status = Running (err=<nil>)
	I1216 07:00:16.404265 1675348 status.go:176] ha-614518 status: &{Name:ha-614518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:00:16.404283 1675348 status.go:174] checking status of ha-614518-m02 ...
	I1216 07:00:16.404721 1675348 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:00:16.422582 1675348 status.go:371] ha-614518-m02 host status = "Stopped" (err=<nil>)
	I1216 07:00:16.422629 1675348 status.go:384] host is not running, skipping remaining checks
	I1216 07:00:16.422637 1675348 status.go:176] ha-614518-m02 status: &{Name:ha-614518-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:00:16.422656 1675348 status.go:174] checking status of ha-614518-m03 ...
	I1216 07:00:16.423185 1675348 cli_runner.go:164] Run: docker container inspect ha-614518-m03 --format={{.State.Status}}
	I1216 07:00:16.441560 1675348 status.go:371] ha-614518-m03 host status = "Running" (err=<nil>)
	I1216 07:00:16.441585 1675348 host.go:66] Checking if "ha-614518-m03" exists ...
	I1216 07:00:16.441904 1675348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m03
	I1216 07:00:16.461428 1675348 host.go:66] Checking if "ha-614518-m03" exists ...
	I1216 07:00:16.461754 1675348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:00:16.461801 1675348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m03
	I1216 07:00:16.480869 1675348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m03/id_rsa Username:docker}
	I1216 07:00:16.574534 1675348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:00:16.592425 1675348 kubeconfig.go:125] found "ha-614518" server: "https://192.168.49.254:8443"
	I1216 07:00:16.592457 1675348 api_server.go:166] Checking apiserver status ...
	I1216 07:00:16.592563 1675348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:00:16.604930 1675348 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1211/cgroup
	I1216 07:00:16.616115 1675348 api_server.go:182] apiserver freezer: "5:freezer:/docker/27d99eda89d171b8d7ecfba8c63302c753f7645b4645fd524b76fef328eab58f/crio/crio-e4f5fee886d94231746a0f79f5d782fa0f1a919f198309ccca90c2e0203e2f04"
	I1216 07:00:16.616200 1675348 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27d99eda89d171b8d7ecfba8c63302c753f7645b4645fd524b76fef328eab58f/crio/crio-e4f5fee886d94231746a0f79f5d782fa0f1a919f198309ccca90c2e0203e2f04/freezer.state
	I1216 07:00:16.625319 1675348 api_server.go:204] freezer state: "THAWED"
	I1216 07:00:16.625349 1675348 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 07:00:16.634109 1675348 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 07:00:16.634140 1675348 status.go:463] ha-614518-m03 apiserver status = Running (err=<nil>)
	I1216 07:00:16.634150 1675348 status.go:176] ha-614518-m03 status: &{Name:ha-614518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:00:16.634167 1675348 status.go:174] checking status of ha-614518-m04 ...
	I1216 07:00:16.634489 1675348 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:00:16.653402 1675348 status.go:371] ha-614518-m04 host status = "Running" (err=<nil>)
	I1216 07:00:16.653427 1675348 host.go:66] Checking if "ha-614518-m04" exists ...
	I1216 07:00:16.653734 1675348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614518-m04
	I1216 07:00:16.672954 1675348 host.go:66] Checking if "ha-614518-m04" exists ...
	I1216 07:00:16.673299 1675348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:00:16.673344 1675348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614518-m04
	I1216 07:00:16.691740 1675348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/ha-614518-m04/id_rsa Username:docker}
	I1216 07:00:16.786653 1675348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:00:16.803234 1675348 status.go:176] ha-614518-m04 status: &{Name:ha-614518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.94s)
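
The stderr above shows how the status command probes the apiserver on each control-plane node: find the kube-apiserver PID, read its freezer cgroup state, then hit /healthz on the HA virtual IP. A rough manual re-trace under the same assumptions (container IDs and PIDs will differ between runs; anonymous access to /healthz is assumed, as it was in this run):

    PROFILE=ha-614518
    # PID of the apiserver inside the primary control-plane node
    PID=$(out/minikube-linux-arm64 -p "$PROFILE" ssh "sudo pgrep -xnf kube-apiserver.*minikube.*" | tr -d '\r')
    out/minikube-linux-arm64 -p "$PROFILE" ssh "sudo egrep '^[0-9]+:freezer:' /proc/${PID}/cgroup"
    curl -ks https://192.168.49.254:8443/healthz; echo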

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (33.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 node start m02 --alsologtostderr -v 5: (31.639372523s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5: (1.233801293s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.346199529s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 stop --alsologtostderr -v 5
E1216 07:01:06.670633 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:01:06.817299 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:01:11.399157 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 stop --alsologtostderr -v 5: (27.758400097s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 start --wait true --alsologtostderr -v 5
E1216 07:01:34.379007 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 start --wait true --alsologtostderr -v 5: (1m35.787819436s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.74s)
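
The assertion here is that the node list survives a full stop/start cycle; a sketch of the same check done by hand, using only commands that appear in this run:

    PROFILE=ha-614518
    out/minikube-linux-arm64 -p "$PROFILE" node list > /tmp/nodes.before
    out/minikube-linux-arm64 -p "$PROFILE" stop
    out/minikube-linux-arm64 -p "$PROFILE" start --wait true
    out/minikube-linux-arm64 -p "$PROFILE" node list > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after && echo "node list unchanged"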

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 node delete m03 --alsologtostderr -v 5: (11.120344489s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1216 07:03:08.325396 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 stop --alsologtostderr -v 5: (36.147352193s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5: exit status 7 (110.80768ms)

                                                
                                                
-- stdout --
	ha-614518
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614518-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614518-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 07:03:44.765955 1687460 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:03:44.766106 1687460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.766138 1687460 out.go:374] Setting ErrFile to fd 2...
	I1216 07:03:44.766151 1687460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:03:44.767045 1687460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:03:44.767319 1687460 out.go:368] Setting JSON to false
	I1216 07:03:44.767383 1687460 mustload.go:66] Loading cluster: ha-614518
	I1216 07:03:44.767434 1687460 notify.go:221] Checking for updates...
	I1216 07:03:44.767922 1687460 config.go:182] Loaded profile config "ha-614518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:03:44.767974 1687460 status.go:174] checking status of ha-614518 ...
	I1216 07:03:44.768921 1687460 cli_runner.go:164] Run: docker container inspect ha-614518 --format={{.State.Status}}
	I1216 07:03:44.787586 1687460 status.go:371] ha-614518 host status = "Stopped" (err=<nil>)
	I1216 07:03:44.787608 1687460 status.go:384] host is not running, skipping remaining checks
	I1216 07:03:44.787614 1687460 status.go:176] ha-614518 status: &{Name:ha-614518 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:03:44.787644 1687460 status.go:174] checking status of ha-614518-m02 ...
	I1216 07:03:44.787949 1687460 cli_runner.go:164] Run: docker container inspect ha-614518-m02 --format={{.State.Status}}
	I1216 07:03:44.807154 1687460 status.go:371] ha-614518-m02 host status = "Stopped" (err=<nil>)
	I1216 07:03:44.807193 1687460 status.go:384] host is not running, skipping remaining checks
	I1216 07:03:44.807209 1687460 status.go:176] ha-614518-m02 status: &{Name:ha-614518-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:03:44.807239 1687460 status.go:174] checking status of ha-614518-m04 ...
	I1216 07:03:44.807532 1687460 cli_runner.go:164] Run: docker container inspect ha-614518-m04 --format={{.State.Status}}
	I1216 07:03:44.826982 1687460 status.go:371] ha-614518-m04 host status = "Stopped" (err=<nil>)
	I1216 07:03:44.827007 1687460 status.go:384] host is not running, skipping remaining checks
	I1216 07:03:44.827014 1687460 status.go:176] ha-614518-m04 status: &{Name:ha-614518-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.26s)
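
As in the StopSecondaryNode run, status exits non-zero once hosts are stopped (exit status 7 in this run), so callers have to tolerate the failure; a minimal way to see the code:

    out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5 || echo "status exited with code $?"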

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (77.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 node add --control-plane --alsologtostderr -v 5
E1216 07:10:49.893317 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:11:06.670940 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:11:06.817667 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 node add --control-plane --alsologtostderr -v 5: (1m16.707652882s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-614518 status --alsologtostderr -v 5: (1.03398754s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.74s)

                                                
                                    
x
+
TestJSONOutput/start/Command (80.88s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-770419 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1216 07:12:29.740700 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:13:08.325388 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-770419 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.881315115s)
--- PASS: TestJSONOutput/start/Command (80.88s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-770419 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-770419 --output=json --user=testUser: (5.875203151s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-108895 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-108895 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (104.82983ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ee3a4b01-6040-4a03-b08b-499ce6ae02b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-108895] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f65c17f-8c2c-4f6a-b0d2-d3b5fd885253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22141"}}
	{"specversion":"1.0","id":"0e4235bb-e8d9-44d0-a1e1-2415c9ce9462","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"679d4161-1f93-41d4-baba-bd99635d3634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig"}}
	{"specversion":"1.0","id":"34821839-0f41-4431-a963-8f6a39da3def","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube"}}
	{"specversion":"1.0","id":"f4be6402-b87d-4c75-9db7-b2991f439fbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e2857ba5-8a4b-4a80-8e7a-60690ff03d74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"493db65c-5659-4bda-bd05-80e9c7dd060c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-108895" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-108895
--- PASS: TestErrorJSONOutput (0.25s)
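
The --output=json stream above is one CloudEvents object per line. A hedged jq filter for pulling out just the error events (the field names are read off the io.k8s.sigs.minikube.error event shown in the stdout block; jq availability is assumed, and re-running this creates a throwaway profile that needs deleting afterwards):

    out/minikube-linux-arm64 start -p json-output-error-108895 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | [.data.name, .data.exitcode, .data.message] | @tsv'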

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (42.44s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-538626 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-538626 --network=: (40.211638813s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-538626" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-538626
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-538626: (2.194935128s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.44s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.75s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-171054 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-171054 --network=bridge: (33.637780711s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-171054" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-171054
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-171054: (2.088458153s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.75s)

                                                
                                    
x
+
TestKicExistingNetwork (33.26s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1216 07:14:49.152291 1599255 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 07:14:49.168871 1599255 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 07:14:49.168945 1599255 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 07:14:49.168962 1599255 cli_runner.go:164] Run: docker network inspect existing-network
W1216 07:14:49.189162 1599255 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 07:14:49.189191 1599255 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1216 07:14:49.189208 1599255 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1216 07:14:49.189320 1599255 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 07:14:49.207463 1599255 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-34c8049a560a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:55:f3:91:6e:93} reservation:<nil>}
I1216 07:14:49.207830 1599255 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016a33e0}
I1216 07:14:49.207852 1599255 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 07:14:49.207905 1599255 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 07:14:49.267555 1599255 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-583001 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-583001 --network=existing-network: (30.988199196s)
helpers_test.go:176: Cleaning up "existing-network-583001" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-583001
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-583001: (2.116463743s)
I1216 07:15:22.389778 1599255 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.26s)
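
The log above shows the network the test pre-creates before starting with --network=existing-network; the equivalent commands, copied from the cli_runner lines (192.168.58.0/24 was simply the first free private subnet at the time, and the labels are the ones minikube itself applies):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-arm64 start -p existing-network-583001 --network=existing-network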

                                                
                                    
x
+
TestKicCustomSubnet (35.76s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-995773 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-995773 --subnet=192.168.60.0/24: (33.471905946s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-995773 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-995773" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-995773
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-995773: (2.263062972s)
--- PASS: TestKicCustomSubnet (35.76s)

                                                
                                    
x
+
TestKicStaticIP (39.33s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-495283 --static-ip=192.168.200.200
E1216 07:16:06.677255 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:16:06.817961 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-495283 --static-ip=192.168.200.200: (36.907620358s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-495283 ip
helpers_test.go:176: Cleaning up "static-ip-495283" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-495283
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-495283: (2.261374354s)
--- PASS: TestKicStaticIP (39.33s)
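The same scenario reduced to the commands a user would run, copied from the log above:

    # request a fixed node IP and read it back
    minikube start -p static-ip-495283 --static-ip=192.168.200.200
    minikube -p static-ip-495283 ip      # expect 192.168.200.200
    minikube delete -p static-ip-495283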

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (73.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-752215 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-752215 --driver=docker  --container-runtime=crio: (34.24865669s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-754833 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-754833 --driver=docker  --container-runtime=crio: (33.228837185s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-752215
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-754833
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-754833" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-754833
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-754833: (2.172878757s)
helpers_test.go:176: Cleaning up "first-752215" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-752215
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-752215: (2.046988448s)
--- PASS: TestMinikubeProfile (73.36s)
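A condensed manual version of the profile juggling above (same profile names and flags as logged):

    minikube start -p first-752215 --driver=docker --container-runtime=crio
    minikube start -p second-754833 --driver=docker --container-runtime=crio
    minikube profile first-752215        # switch the active profile
    minikube profile list -ojson         # machine-readable list showing both profiles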

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.8s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-984454 --memory=3072 --mount-string /tmp/TestMountStartserial1245317266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1216 07:17:51.401197 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-984454 --memory=3072 --mount-string /tmp/TestMountStartserial1245317266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.795683714s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.80s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-984454 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
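Taken together, the two steps above amount to the following hand-run sequence (flags copied from the logged command; the host path is the test's temp directory):

    # start a no-kubernetes node with a host directory mounted at /minikube-host, then check it over SSH
    minikube start -p mount-start-1-984454 --memory=3072 \
      --mount-string /tmp/TestMountStartserial1245317266/001:/minikube-host \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-start-1-984454 ssh -- ls /minikube-host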

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-986333 --memory=3072 --mount-string /tmp/TestMountStartserial1245317266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-986333 --memory=3072 --mount-string /tmp/TestMountStartserial1245317266/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.088505291s)
E1216 07:18:08.327541 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountSecond (9.09s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-986333 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-984454 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-984454 --alsologtostderr -v=5: (1.713870545s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-986333 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-986333
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-986333: (1.298666076s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.84s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-986333
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-986333: (6.837336086s)
--- PASS: TestMountStart/serial/RestartStopped (7.84s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-986333 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (133.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-726681 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-726681 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m12.600727696s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.13s)
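A minimal by-hand equivalent of this two-node bring-up (same flags as the logged start command, minus verbosity options):

    # create a two-node cluster and confirm both the control plane and the worker are up
    minikube start -p multinode-726681 --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
    minikube -p multinode-726681 status
    kubectl --context multinode-726681 get nodes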

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-726681 -- rollout status deployment/busybox: (3.20488564s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ggdmw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ptr2n -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ggdmw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ptr2n -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ggdmw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ptr2n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.94s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ggdmw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ggdmw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ptr2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-726681 -- exec busybox-7b57f96db7-ptr2n -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (57.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-726681 -v=5 --alsologtostderr
E1216 07:21:06.670675 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:21:06.817497 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-726681 -v=5 --alsologtostderr: (57.118076128s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.80s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-726681 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp testdata/cp-test.txt multinode-726681:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2118633609/001/cp-test_multinode-726681.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681:/home/docker/cp-test.txt multinode-726681-m02:/home/docker/cp-test_multinode-726681_multinode-726681-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m02 "sudo cat /home/docker/cp-test_multinode-726681_multinode-726681-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681:/home/docker/cp-test.txt multinode-726681-m03:/home/docker/cp-test_multinode-726681_multinode-726681-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m03 "sudo cat /home/docker/cp-test_multinode-726681_multinode-726681-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp testdata/cp-test.txt multinode-726681-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2118633609/001/cp-test_multinode-726681-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681-m02:/home/docker/cp-test.txt multinode-726681:/home/docker/cp-test_multinode-726681-m02_multinode-726681.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681 "sudo cat /home/docker/cp-test_multinode-726681-m02_multinode-726681.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681-m02:/home/docker/cp-test.txt multinode-726681-m03:/home/docker/cp-test_multinode-726681-m02_multinode-726681-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m03 "sudo cat /home/docker/cp-test_multinode-726681-m02_multinode-726681-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp testdata/cp-test.txt multinode-726681-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2118633609/001/cp-test_multinode-726681-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681-m03:/home/docker/cp-test.txt multinode-726681:/home/docker/cp-test_multinode-726681-m03_multinode-726681.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681 "sudo cat /home/docker/cp-test_multinode-726681-m03_multinode-726681.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 cp multinode-726681-m03:/home/docker/cp-test.txt multinode-726681-m02:/home/docker/cp-test_multinode-726681-m03_multinode-726681-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 ssh -n multinode-726681-m02 "sudo cat /home/docker/cp-test_multinode-726681-m03_multinode-726681-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.55s)
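The copy matrix above boils down to repeated use of `minikube cp` and `minikube ssh -n`; one representative round trip, with the same file names as in the log:

    # push a file to the control plane, fan it out to a worker, and read it back there
    minikube -p multinode-726681 cp testdata/cp-test.txt multinode-726681:/home/docker/cp-test.txt
    minikube -p multinode-726681 cp multinode-726681:/home/docker/cp-test.txt multinode-726681-m02:/home/docker/cp-test_multinode-726681_multinode-726681-m02.txt
    minikube -p multinode-726681 ssh -n multinode-726681-m02 "sudo cat /home/docker/cp-test_multinode-726681_multinode-726681-m02.txt"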

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-726681 node stop m03: (1.320567387s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-726681 status: exit status 7 (534.140587ms)

                                                
                                                
-- stdout --
	multinode-726681
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-726681-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-726681-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr: exit status 7 (545.781341ms)

                                                
                                                
-- stdout --
	multinode-726681
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-726681-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-726681-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 07:21:52.801732 1739777 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:21:52.801896 1739777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:21:52.801918 1739777 out.go:374] Setting ErrFile to fd 2...
	I1216 07:21:52.801938 1739777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:21:52.802306 1739777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:21:52.802552 1739777 out.go:368] Setting JSON to false
	I1216 07:21:52.802607 1739777 mustload.go:66] Loading cluster: multinode-726681
	I1216 07:21:52.803393 1739777 config.go:182] Loaded profile config "multinode-726681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:21:52.803443 1739777 status.go:174] checking status of multinode-726681 ...
	I1216 07:21:52.804296 1739777 cli_runner.go:164] Run: docker container inspect multinode-726681 --format={{.State.Status}}
	I1216 07:21:52.805782 1739777 notify.go:221] Checking for updates...
	I1216 07:21:52.824876 1739777 status.go:371] multinode-726681 host status = "Running" (err=<nil>)
	I1216 07:21:52.824898 1739777 host.go:66] Checking if "multinode-726681" exists ...
	I1216 07:21:52.825206 1739777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-726681
	I1216 07:21:52.858016 1739777 host.go:66] Checking if "multinode-726681" exists ...
	I1216 07:21:52.858327 1739777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:21:52.858372 1739777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-726681
	I1216 07:21:52.879724 1739777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34385 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/multinode-726681/id_rsa Username:docker}
	I1216 07:21:52.974233 1739777 ssh_runner.go:195] Run: systemctl --version
	I1216 07:21:52.980989 1739777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:21:52.994562 1739777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 07:21:53.062858 1739777 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 07:21:53.052824695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 07:21:53.063476 1739777 kubeconfig.go:125] found "multinode-726681" server: "https://192.168.67.2:8443"
	I1216 07:21:53.063523 1739777 api_server.go:166] Checking apiserver status ...
	I1216 07:21:53.063572 1739777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 07:21:53.075212 1739777 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	I1216 07:21:53.084385 1739777 api_server.go:182] apiserver freezer: "5:freezer:/docker/aa07ffb2eef55c09e40d4e69787310edcef43d8f9d9c937ca0d76a737ffa7ff7/crio/crio-68cb050c251820834d3fe94bc47689b3cad0f4b72465c926d68f76bab899c92e"
	I1216 07:21:53.084557 1739777 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/aa07ffb2eef55c09e40d4e69787310edcef43d8f9d9c937ca0d76a737ffa7ff7/crio/crio-68cb050c251820834d3fe94bc47689b3cad0f4b72465c926d68f76bab899c92e/freezer.state
	I1216 07:21:53.092773 1739777 api_server.go:204] freezer state: "THAWED"
	I1216 07:21:53.092799 1739777 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 07:21:53.102117 1739777 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 07:21:53.102202 1739777 status.go:463] multinode-726681 apiserver status = Running (err=<nil>)
	I1216 07:21:53.102228 1739777 status.go:176] multinode-726681 status: &{Name:multinode-726681 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:21:53.102268 1739777 status.go:174] checking status of multinode-726681-m02 ...
	I1216 07:21:53.102618 1739777 cli_runner.go:164] Run: docker container inspect multinode-726681-m02 --format={{.State.Status}}
	I1216 07:21:53.120290 1739777 status.go:371] multinode-726681-m02 host status = "Running" (err=<nil>)
	I1216 07:21:53.120327 1739777 host.go:66] Checking if "multinode-726681-m02" exists ...
	I1216 07:21:53.120727 1739777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-726681-m02
	I1216 07:21:53.137894 1739777 host.go:66] Checking if "multinode-726681-m02" exists ...
	I1216 07:21:53.138233 1739777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 07:21:53.138286 1739777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-726681-m02
	I1216 07:21:53.161054 1739777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34390 SSHKeyPath:/home/jenkins/minikube-integration/22141-1596013/.minikube/machines/multinode-726681-m02/id_rsa Username:docker}
	I1216 07:21:53.257720 1739777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 07:21:53.271369 1739777 status.go:176] multinode-726681-m02 status: &{Name:multinode-726681-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:21:53.271404 1739777 status.go:174] checking status of multinode-726681-m03 ...
	I1216 07:21:53.271773 1739777 cli_runner.go:164] Run: docker container inspect multinode-726681-m03 --format={{.State.Status}}
	I1216 07:21:53.290331 1739777 status.go:371] multinode-726681-m03 host status = "Stopped" (err=<nil>)
	I1216 07:21:53.290371 1739777 status.go:384] host is not running, skipping remaining checks
	I1216 07:21:53.290383 1739777 status.go:176] multinode-726681-m03 status: &{Name:multinode-726681-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-726681 node start m03 -v=5 --alsologtostderr: (7.945568745s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.75s)
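The StopNode/StartAfterStop pair corresponds to this short sequence; note that `minikube status` deliberately exits 7 while any node is stopped, as seen in the output above:

    minikube -p multinode-726681 node stop m03
    minikube -p multinode-726681 status; echo $?     # prints the degraded view, exit code 7
    minikube -p multinode-726681 node start m03
    minikube -p multinode-726681 status              # back to exit code 0 once m03 is Running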

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-726681
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-726681
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-726681: (25.106150979s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-726681 --wait=true -v=5 --alsologtostderr
E1216 07:23:08.326039 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-726681 --wait=true -v=5 --alsologtostderr: (54.661860188s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-726681
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.92s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-726681 node delete m03: (4.935191444s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-726681 stop: (23.844584006s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-726681 status: exit status 7 (103.959384ms)

                                                
                                                
-- stdout --
	multinode-726681
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-726681-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr: exit status 7 (92.950269ms)

                                                
                                                
-- stdout --
	multinode-726681
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-726681-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 07:23:51.597265 1747656 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:23:51.597395 1747656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:23:51.597405 1747656 out.go:374] Setting ErrFile to fd 2...
	I1216 07:23:51.597411 1747656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:23:51.597644 1747656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:23:51.597828 1747656 out.go:368] Setting JSON to false
	I1216 07:23:51.597858 1747656 mustload.go:66] Loading cluster: multinode-726681
	I1216 07:23:51.597910 1747656 notify.go:221] Checking for updates...
	I1216 07:23:51.598252 1747656 config.go:182] Loaded profile config "multinode-726681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:23:51.598275 1747656 status.go:174] checking status of multinode-726681 ...
	I1216 07:23:51.598860 1747656 cli_runner.go:164] Run: docker container inspect multinode-726681 --format={{.State.Status}}
	I1216 07:23:51.617191 1747656 status.go:371] multinode-726681 host status = "Stopped" (err=<nil>)
	I1216 07:23:51.617217 1747656 status.go:384] host is not running, skipping remaining checks
	I1216 07:23:51.617225 1747656 status.go:176] multinode-726681 status: &{Name:multinode-726681 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 07:23:51.617251 1747656 status.go:174] checking status of multinode-726681-m02 ...
	I1216 07:23:51.617558 1747656 cli_runner.go:164] Run: docker container inspect multinode-726681-m02 --format={{.State.Status}}
	I1216 07:23:51.641435 1747656 status.go:371] multinode-726681-m02 host status = "Stopped" (err=<nil>)
	I1216 07:23:51.641461 1747656 status.go:384] host is not running, skipping remaining checks
	I1216 07:23:51.641469 1747656 status.go:176] multinode-726681-m02 status: &{Name:multinode-726681-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-726681 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-726681 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.243225212s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-726681 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.97s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-726681
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-726681-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-726681-m02 --driver=docker  --container-runtime=crio: exit status 14 (100.508385ms)

                                                
                                                
-- stdout --
	* [multinode-726681-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-726681-m02' is duplicated with machine name 'multinode-726681-m02' in profile 'multinode-726681'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-726681-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-726681-m03 --driver=docker  --container-runtime=crio: (33.047667241s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-726681
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-726681: exit status 80 (338.799628ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-726681 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-726681-m03 already exists in multinode-726681-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-726681-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-726681-m03: (2.130368723s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.68s)

                                                
                                    
TestPreload (121.5s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-163648 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1216 07:26:06.671090 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:26:06.817281 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-163648 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (59.936183823s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-163648 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-163648 image pull gcr.io/k8s-minikube/busybox: (2.142466847s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-163648
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-163648: (5.973102707s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-163648 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-163648 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.711888278s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-163648 image list
helpers_test.go:176: Cleaning up "test-preload-163648" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-163648
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-163648: (2.493133836s)
--- PASS: TestPreload (121.50s)
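The preload round trip above, condensed into its manual steps (flags copied from the logged commands, minus verbosity options):

    # populate the image store without the preloaded tarball, then restart with preload and confirm the image survived
    minikube start -p test-preload-163648 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio
    minikube -p test-preload-163648 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-163648
    minikube start -p test-preload-163648 --preload=true --wait=true --driver=docker --container-runtime=crio
    minikube -p test-preload-163648 image list      # gcr.io/k8s-minikube/busybox should still be listed
    minikube delete -p test-preload-163648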

                                                
                                    
TestScheduledStopUnix (109.77s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-786615 --memory=3072 --driver=docker  --container-runtime=crio
E1216 07:27:29.894784 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-786615 --memory=3072 --driver=docker  --container-runtime=crio: (33.061681126s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-786615 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 07:27:55.360929 1761826 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:27:55.361117 1761826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:27:55.361130 1761826 out.go:374] Setting ErrFile to fd 2...
	I1216 07:27:55.361136 1761826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:27:55.361537 1761826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:27:55.362052 1761826 out.go:368] Setting JSON to false
	I1216 07:27:55.362248 1761826 mustload.go:66] Loading cluster: scheduled-stop-786615
	I1216 07:27:55.362747 1761826 config.go:182] Loaded profile config "scheduled-stop-786615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:27:55.362888 1761826 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/config.json ...
	I1216 07:27:55.363215 1761826 mustload.go:66] Loading cluster: scheduled-stop-786615
	I1216 07:27:55.363416 1761826 config.go:182] Loaded profile config "scheduled-stop-786615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-786615 -n scheduled-stop-786615
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-786615 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 07:27:55.825092 1761916 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:27:55.825280 1761916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:27:55.825308 1761916 out.go:374] Setting ErrFile to fd 2...
	I1216 07:27:55.825328 1761916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:27:55.825623 1761916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:27:55.825964 1761916 out.go:368] Setting JSON to false
	I1216 07:27:55.826236 1761916 daemonize_unix.go:73] killing process 1761843 as it is an old scheduled stop
	I1216 07:27:55.826360 1761916 mustload.go:66] Loading cluster: scheduled-stop-786615
	I1216 07:27:55.826797 1761916 config.go:182] Loaded profile config "scheduled-stop-786615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:27:55.826922 1761916 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/config.json ...
	I1216 07:27:55.827207 1761916 mustload.go:66] Loading cluster: scheduled-stop-786615
	I1216 07:27:55.827390 1761916 config.go:182] Loaded profile config "scheduled-stop-786615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 1761843 is a zombie
I1216 07:27:55.832351 1599255 retry.go:31] will retry after 108.022µs: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.835360 1599255 retry.go:31] will retry after 158.808µs: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.835838 1599255 retry.go:31] will retry after 314.768µs: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.839884 1599255 retry.go:31] will retry after 194.882µs: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.841097 1599255 retry.go:31] will retry after 665.568µs: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.842302 1599255 retry.go:31] will retry after 618.231µs: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.843472 1599255 retry.go:31] will retry after 1.58334ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.845706 1599255 retry.go:31] will retry after 1.784506ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.847664 1599255 retry.go:31] will retry after 2.179565ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.850697 1599255 retry.go:31] will retry after 4.991743ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.855919 1599255 retry.go:31] will retry after 8.151284ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.865142 1599255 retry.go:31] will retry after 10.896951ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.876412 1599255 retry.go:31] will retry after 9.751886ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.886721 1599255 retry.go:31] will retry after 17.010124ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.903950 1599255 retry.go:31] will retry after 22.74007ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
I1216 07:27:55.928989 1599255 retry.go:31] will retry after 48.572858ms: open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-786615 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1216 07:28:08.326237 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-786615 -n scheduled-stop-786615
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-786615
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-786615 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 07:28:21.797844 1762275 out.go:360] Setting OutFile to fd 1 ...
	I1216 07:28:21.798046 1762275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:28:21.798077 1762275 out.go:374] Setting ErrFile to fd 2...
	I1216 07:28:21.798102 1762275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 07:28:21.798380 1762275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-1596013/.minikube/bin
	I1216 07:28:21.798661 1762275 out.go:368] Setting JSON to false
	I1216 07:28:21.798800 1762275 mustload.go:66] Loading cluster: scheduled-stop-786615
	I1216 07:28:21.799251 1762275 config.go:182] Loaded profile config "scheduled-stop-786615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 07:28:21.799393 1762275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/scheduled-stop-786615/config.json ...
	I1216 07:28:21.799614 1762275 mustload.go:66] Loading cluster: scheduled-stop-786615
	I1216 07:28:21.799774 1762275 config.go:182] Loaded profile config "scheduled-stop-786615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-786615
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-786615: exit status 7 (65.906394ms)

                                                
                                                
-- stdout --
	scheduled-stop-786615
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-786615 -n scheduled-stop-786615
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-786615 -n scheduled-stop-786615: exit status 7 (64.443389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-786615" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-786615
E1216 07:29:09.742088 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-786615: (5.0594547s)
--- PASS: TestScheduledStopUnix (109.77s)
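The scheduled-stop flow above, reduced to the commands a user would run (same profile and schedule values as logged):

    minikube stop -p scheduled-stop-786615 --schedule 5m           # arm a stop 5 minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-786615
    minikube stop -p scheduled-stop-786615 --cancel-scheduled      # disarm it
    minikube stop -p scheduled-stop-786615 --schedule 15s          # arm a short one and let it fire
    minikube status --format={{.Host}} -p scheduled-stop-786615    # eventually "Stopped" (status exits 7)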

                                                
                                    
TestInsufficientStorage (12.54s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-370177 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-370177 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.908155838s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9b033cfc-c19f-495f-963d-4b31e48c5f50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-370177] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0eb400bc-5909-48d5-bc1f-39526450c2f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22141"}}
	{"specversion":"1.0","id":"7832334a-06d1-4907-92fc-f8d4123fe609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5d403d5d-8c7d-4fef-9c3f-13f80a46b25f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig"}}
	{"specversion":"1.0","id":"e6d8aa69-10f6-4a66-8d1c-db659451ca68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube"}}
	{"specversion":"1.0","id":"bb6f880b-d73d-4887-8643-e0aa47d4898e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b0d0cf9d-f062-428d-902d-4c873c33a56d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7e309928-6de9-4c27-b243-77acc17a0376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eb7f3506-e1b8-40c0-a16e-34528107d248","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7aacd330-a7f1-4320-a675-b218ad86913b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a70c8f20-0bdf-4352-9bca-7a69e9dca7d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"005454e1-8f36-4dd8-85f1-9f06ce3e28d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-370177\" primary control-plane node in \"insufficient-storage-370177\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"21dd5409-f533-426e-91dd-c0ae32837a21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"694cf998-5984-4f53-8400-dc6e213ad8d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f15e46a7-02cc-40e2-9677-7543ef5718ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
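Note on the failure above: this test intentionally simulates a full disk by exporting MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 (visible in the CloudEvents output), so exit status 26 (RSRC_DOCKER_STORAGE) is the expected outcome here. On a real host, the advice in the error payload boils down to commands along these lines (illustrative only, not part of the test):

	$ docker system prune                    # add -a to also remove unused images
	$ minikube ssh -- docker system prune    # only when the cluster itself runs the docker runtime (this run uses crio)
	$ minikube start --force                 # bypass the storage check entirely, as the error message suggests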
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-370177 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-370177 --output=json --layout=cluster: exit status 7 (321.400133ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-370177","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-370177","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
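The --layout=cluster output above encodes component health with HTTP-like status codes (507 InsufficientStorage, 500 Error, 405 Stopped). Below is a minimal Go sketch of consuming that JSON; the struct definitions are illustrative stand-ins, not minikube's own types:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// Illustrative types mirroring the fields in the status output above;
	// field names match the JSON keys, so no struct tags are needed.
	type Component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type Node struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]Component
	}

	type ClusterStatus struct {
		Name          string
		StatusCode    int
		StatusName    string
		StatusDetail  string
		BinaryVersion string
		Components    map[string]Component
		Nodes         []Node
	}

	func main() {
		// Trimmed copy of the first --layout=cluster line shown above.
		raw := `{"Name":"insufficient-storage-370177","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-370177","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var st ClusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
		for _, n := range st.Nodes {
			for name, c := range n.Components {
				fmt.Printf("  %s: %s\n", name, c.StatusName)
			}
		}
	}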
** stderr ** 
	E1216 07:29:22.209756 1763982 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-370177" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-370177 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-370177 --output=json --layout=cluster: exit status 7 (288.796704ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-370177","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-370177","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 07:29:22.500680 1764047 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-370177" does not appear in /home/jenkins/minikube-integration/22141-1596013/kubeconfig
	E1216 07:29:22.510680 1764047 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/insufficient-storage-370177/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-370177" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-370177
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-370177: (2.022724694s)
--- PASS: TestInsufficientStorage (12.54s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (299.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.267533680 start -p running-upgrade-033810 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.267533680 start -p running-upgrade-033810 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.410457892s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-033810 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 07:34:31.403431 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-033810 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.23201322s)
helpers_test.go:176: Cleaning up "running-upgrade-033810" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-033810
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-033810: (1.953400829s)
--- PASS: TestRunningBinaryUpgrade (299.80s)

                                                
                                    
x
+
TestMissingContainerUpgrade (109.14s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.753863951 start -p missing-upgrade-205314 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.753863951 start -p missing-upgrade-205314 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.709725086s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-205314
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-205314
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-205314 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 07:31:06.671534 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:31:06.817025 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-205314 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.738509372s)
helpers_test.go:176: Cleaning up "missing-upgrade-205314" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-205314
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-205314: (2.054434059s)
--- PASS: TestMissingContainerUpgrade (109.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-310359 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-310359 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (96.081845ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-310359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-1596013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-1596013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
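For context, exit status 14 (MK_USAGE) above is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive, and the suggested fix is either clearing a persisted default or dropping the version flag. Illustrative commands (the later subtests below run essentially the second one):

	$ minikube config unset kubernetes-version
	$ out/minikube-linux-arm64 start -p NoKubernetes-310359 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio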

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (49.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-310359 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-310359 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (49.202583824s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-310359 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (105.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-310359 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-310359 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m42.516274129s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-310359 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-310359 status -o json: exit status 2 (429.477964ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-310359","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-310359
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-310359: (2.430270365s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (105.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-310359 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-310359 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.145977396s)
--- PASS: TestNoKubernetes/serial/Start (9.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22141-1596013/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-310359 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-310359 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.087351ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
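A brief note on the exit codes here: systemd's "systemctl is-active" exits 0 only when the unit is active, and typically 3 for an inactive or unknown unit, which is the "Process exited with status 3" surfaced by ssh above; minikube itself then exits 1, and that non-zero exit is what this passing check relies on to confirm kubelet is not running. Run interactively (illustrative, without --quiet so the state is printed):

	$ out/minikube-linux-arm64 ssh -p NoKubernetes-310359 "sudo systemctl is-active kubelet"   # prints "inactive" (or similar) when kubelet is not running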

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-310359
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-310359: (1.386832559s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-310359 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-310359 --driver=docker  --container-runtime=crio: (8.296141797s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-310359 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-310359 "sudo systemctl is-active --quiet service kubelet": exit status 1 (416.466284ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (301.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.711971306 start -p stopped-upgrade-021632 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.711971306 start -p stopped-upgrade-021632 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.020935938s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.711971306 -p stopped-upgrade-021632 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.711971306 -p stopped-upgrade-021632 stop: (1.280864361s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-021632 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 07:41:06.671484 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-364120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:41:06.817216 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 07:43:08.326361 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/functional-487532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-021632 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.104309004s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (301.41s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-021632
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-021632: (1.698725059s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.70s)

                                                
                                    
x
+
TestPause/serial/Start (81.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-375517 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1216 07:44:09.896628 1599255 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-1596013/.minikube/profiles/addons-142606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-375517 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.433084419s)
--- PASS: TestPause/serial/Start (81.43s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (28.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-375517 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-375517 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.922782389s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.94s)

                                                
                                    

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-840918 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-840918" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-840918
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    